Merge branch 'main' of https://github.com/microsoft/AzureTRE into yuvalyaron-feature/1967-airlock-move-failed-requests-to-failed-state

# Conflicts:
#	api_app/_version.py
yuvalyaron committed Aug 4, 2022
2 parents e9b3269 + 5c9ff90 commit 552bf8f
Showing 16 changed files with 85 additions and 17 deletions.
21 changes: 20 additions & 1 deletion CHANGELOG.md
@@ -4,6 +4,24 @@

**BREAKING CHANGES & MIGRATIONS**:

*

FEATURES:

*

ENHANCEMENTS:

*

BUG FIXES:

*

## 0.4.1 (August 03, 2022)

**BREAKING CHANGES & MIGRATIONS**:

* Guacamole workspace service configures firewall requirements with deployment pipeline ([#2371](https://github.com/microsoft/AzureTRE/pull/2371)). **Migration** is manual - update the templateVersion of `tre-shared-service-firewall` in Cosmos to `0.4.0` in order to use this capability.
* Workspace now has an AirlockManager role that has the permissions to review airlock requests ([#2349](https://github.com/microsoft/AzureTRE/pull/2349)).

@@ -19,7 +37,8 @@ ENHANCEMENTS:

BUG FIXES:

* Airlock processor creates SAS tokens with _user delegated key_ ([#2382](https://github.com/microsoft/AzureTRE/pull/2376))
* Airlock processor creates SAS tokens with _user delegated key_ ([#2382](https://github.com/microsoft/AzureTRE/pull/2382))
* Script updates to work with deployment repo structure ([#2385](https://github.com/microsoft/AzureTRE/pull/2385))

## 0.4.0 (July 27, 2022)

2 changes: 2 additions & 0 deletions api_app/.env.sample
@@ -25,6 +25,8 @@ TRE_ID=mytre-dev-3142
# -------------------------
# The Cosmos DB endpoint - keep localhost if using an emulator. Otherwise https://<your_cosmos_db>.documents.azure.com:443/
STATE_STORE_ENDPOINT=https://localhost:8081
# When using the local Cosmos emulator you may wish to disable SSL verification. Set to False to disable it.
STATE_STORE_SSL_VERIFY=True
# The Cosmos DB key, use only with local emulator
STATE_STORE_KEY=__CHANGE_ME__
# The Cosmos DB account name
2 changes: 1 addition & 1 deletion api_app/_version.py
@@ -1 +1 @@
__version__ = "0.4.5"
__version__ = "0.4.7"
8 changes: 4 additions & 4 deletions api_app/api/dependencies/database.py
@@ -19,11 +19,11 @@ def connect_to_db() -> CosmosClient:

try:
primary_master_key = get_store_key()
if config.DEBUG:
# ignore TLS(setup is pain) when on dev container and connecting to cosmosdb on windows host.
cosmos_client = CosmosClient(config.STATE_STORE_ENDPOINT, primary_master_key, connection_verify=False)
else:
if config.STATE_STORE_SSL_VERIFY:
cosmos_client = CosmosClient(config.STATE_STORE_ENDPOINT, primary_master_key)
else:
# ignore TLS (setup is a pain) when using local SSL emulator.
cosmos_client = CosmosClient(config.STATE_STORE_ENDPOINT, primary_master_key, connection_verify=False)
logging.debug("Connection established")
return cosmos_client
except Exception as e:
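The change to `connect_to_db` above replaces the `DEBUG` check with the new `STATE_STORE_SSL_VERIFY` setting: TLS verification is only skipped when that flag is explicitly set to False (e.g. against the local Cosmos emulator). A minimal, testable sketch of just that decision, with a hypothetical `cosmos_client_kwargs` helper standing in for the real `CosmosClient` construction:

```python
def cosmos_client_kwargs(endpoint: str, key: str, ssl_verify: bool) -> dict:
    """Build the CosmosClient keyword arguments (hypothetical helper,
    not part of the AzureTRE codebase — it mirrors the branch above)."""
    kwargs = {"url": endpoint, "credential": key}
    if not ssl_verify:
        # ignore TLS (setup is a pain) when using the local SSL emulator
        kwargs["connection_verify"] = False
    return kwargs
```

With the default `STATE_STORE_SSL_VERIFY=True`, `connection_verify` is left unset so the SDK verifies certificates as usual.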
1 change: 1 addition & 0 deletions api_app/core/config.py
@@ -18,6 +18,7 @@

# State store configuration
STATE_STORE_ENDPOINT: str = config("STATE_STORE_ENDPOINT", default="") # Cosmos DB endpoint
STATE_STORE_SSL_VERIFY: bool = config("STATE_STORE_SSL_VERIFY", cast=bool, default=True)
STATE_STORE_KEY: str = config("STATE_STORE_KEY", default="") # Cosmos DB access key
COSMOSDB_ACCOUNT_NAME: str = config("COSMOSDB_ACCOUNT_NAME", default="") # Cosmos DB account name
STATE_STORE_DATABASE = "AzureTRE"
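The new config entry uses `cast=bool`, so the environment variable's string value must be parsed into a boolean. A rough stand-in for that parsing (`env_bool` is a hypothetical helper; the exact cast semantics of the config library may differ):

```python
import os

def env_bool(name: str, default: bool = True) -> bool:
    """Parse a boolean environment variable, treating common truthy
    strings as True and everything else as False (illustrative only)."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")
```

This is why `.env.sample` can carry `STATE_STORE_SSL_VERIFY=True` as plain text while the API sees a real boolean.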
2 changes: 2 additions & 0 deletions docs/tre-admins/environment-variables.md
@@ -36,3 +36,5 @@
| `ENABLE_AIRLOCK_MALWARE_SCANNING` | If False, airlock requests will skip the malware scanning stage. If set to True, a scanner must be set up manually! |
| `ENABLE_LOCAL_DEBUGGING` | Set to `false` by default. Setting this to `true` will ensure that Azure resources are accessible from your local development machine. (e.g. ServiceBus and Cosmos) |
| `PUBLIC_DEPLOYMENT_IP_ADDRESS` | The public IP address of the machine that is deploying TRE. (Your desktop or the build agents). In certain locations a dynamic script to retrieve this from [https://ipecho.net/plain](https://ipecho.net/plain) does not work. If this is the case, then you can 'hardcode' your IP. |
| `ADMIN_JUMPBOX_VM_SKU` | The SKU of the VM to use for the admin jumpbox. |
| `RESOURCE_PROCESSOR_VMSS_SKU` | The SKU of the VMSS to use for the resource processor VMs. |
11 changes: 6 additions & 5 deletions e2e_tests/airlock/request.py
@@ -23,7 +23,7 @@ async def post_request(payload, endpoint, access_token, verify, assert_status):
full_endpoint, headers=auth_headers, json=payload, timeout=TIMEOUT
)

LOGGER.debug(
LOGGER.info(
f"Response Status code: {response.status_code} Content: {response.content}"
)
assert response.status_code == assert_status
@@ -41,7 +41,7 @@ async def get_request(endpoint, access_token, verify, assert_status):
response = await client.get(
full_endpoint, headers=auth_headers, timeout=TIMEOUT
)
LOGGER.debug(
LOGGER.info(
f"Response Status code: {response.status_code} Content: {response.content}"
)

@@ -64,7 +64,8 @@ async def upload_blob_using_sas(file_path: str, sas_url: str):
file_name = os.path.basename(file_path)
_, file_ext = os.path.splitext(file_name)

LOGGER.info(f"uploading {file_name} to container")
blob_url = f"{storage_account_url}{container_name}/{file_name}?{parsed_sas_url.query}"
LOGGER.info(f"uploading [{file_name}] to container [{blob_url}]")
with open(file_path, "rb") as fh:
headers = {"x-ms-blob-type": "BlockBlob"}
content_type = ""
@@ -74,12 +75,12 @@ async def upload_blob_using_sas(file_path: str, sas_url: str):
).content_type

response = await client.put(
url=f"{storage_account_url}{container_name}/{file_name}?{parsed_sas_url.query}",
url=blob_url,
files={'upload-file': (file_name, fh, content_type)},
headers=headers
)
LOGGER.info(f"response code: {response.status_code}")
return response.status_code
return response


async def wait_for_status(
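The refactor above builds `blob_url` once from the container-level SAS URL and reuses it for both the log line and the PUT. A sketch of that URL construction, extracted into a hypothetical `build_blob_url` helper:

```python
from urllib.parse import urlparse

def build_blob_url(sas_container_url: str, file_name: str) -> str:
    """Derive the per-blob PUT URL from a container-level SAS URL,
    i.e. scheme://account/container/blob?sas-query (hypothetical
    extraction of the logic in upload_blob_using_sas)."""
    parsed = urlparse(sas_container_url)
    storage_account_url = f"{parsed.scheme}://{parsed.netloc}/"
    container_name = parsed.path.strip("/")
    return f"{storage_account_url}{container_name}/{file_name}?{parsed.query}"
```

The SAS query string is carried over unchanged, so the same container-scoped token authorizes the blob upload.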
31 changes: 29 additions & 2 deletions e2e_tests/test_airlock.py
@@ -1,4 +1,5 @@
import pytest
import asyncio
import logging
import config
from resources.workspace import get_workspace_auth_details
@@ -13,10 +14,12 @@


@pytest.mark.airlock
@pytest.mark.timeout(1200)
@pytest.mark.extended
@pytest.mark.timeout(1600)
async def test_airlock_import_flow(admin_token, verify) -> None:

# 1. create workspace
LOGGER.info("Creating workspace")
payload = {
"templateName": "tre-workspace-base",
"properties": {
@@ -35,6 +38,7 @@ async def test_airlock_import_flow(admin_token, verify) -> None:
workspace_owner_token, scope_uri = await get_workspace_auth_details(admin_token=admin_token, workspace_id=workspace_id, verify=verify)

# 2. create airlock request
LOGGER.info("Creating airlock request")
payload = {
"requestType": airlock_strings.IMPORT,
"businessJustification": "some business justification"
@@ -49,19 +53,41 @@
request_id = request_result["airlockRequest"]["id"]

# 3. get container link
LOGGER.info("Getting airlock request container URL")
request_result = await get_request(f'/api{workspace_path}/requests/{request_id}/link', workspace_owner_token, verify, 200)
containerUrl = request_result["containerUrl"]

# 4. upload blob
await upload_blob_using_sas('./test_airlock_sample.txt', containerUrl)

# currently there's no elegant way to check if the container was created yet because it's an async process
# it would be better to create another draft_in_progress step and wait for the request to change to that state before
# uploading the blob

i = 1
blob_uploaded = False
wait_time = 30
while not blob_uploaded:
LOGGER.info(f"try #{i} to upload a blob to container [{containerUrl}]")
upload_response = await upload_blob_using_sas('./test_airlock_sample.txt', containerUrl)

if upload_response.status_code == 404:
i += 1
LOGGER.info(f"sleeping for {wait_time} sec until the container is created")
await asyncio.sleep(wait_time)
else:
assert upload_response.status_code == 201
LOGGER.info("upload blob succeeded")
blob_uploaded = True

# 5. submit request
LOGGER.info("Submitting airlock request")
request_result = await post_request(None, f'/api{workspace_path}/requests/{request_id}/submit', workspace_owner_token, verify, 200)
assert request_result["airlockRequest"]["status"] == airlock_strings.SUBMITTED_STATUS

await wait_for_status(airlock_strings.IN_REVIEW_STATUS, workspace_owner_token, workspace_path, request_id, verify)

# 6. approve request
LOGGER.info("Approving airlock request")
payload = {
"approval": "True",
"decisionExplanation": "the reason why this request was approved/rejected"
@@ -72,4 +98,5 @@ async def test_airlock_import_flow(admin_token, verify) -> None:
await wait_for_status(airlock_strings.APPROVED_STATUS, workspace_owner_token, workspace_path, request_id, verify)

# 7. delete workspace
LOGGER.info("Deleting workspace")
await disable_and_delete_resource(f'/api{workspace_path}', admin_token, verify)
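The upload loop added in step 4 retries while the container returns 404 (it is provisioned asynchronously) and asserts a 201 once the blob lands. The same pattern, extracted into a hypothetical helper with a bounded attempt count rather than an unbounded `while`:

```python
import asyncio

async def upload_with_retry(upload, max_attempts: int = 10, wait_time: int = 30):
    """Retry `upload` while the container is not yet created (404).

    `upload` is any async callable returning an object with a
    status_code attribute — a hypothetical stand-in for
    upload_blob_using_sas; this helper is illustrative, not from
    the AzureTRE codebase."""
    for attempt in range(1, max_attempts + 1):
        response = await upload()
        if response.status_code == 404:
            # container not provisioned yet — back off and retry
            await asyncio.sleep(wait_time)
            continue
        assert response.status_code == 201  # blob created
        return response
    raise TimeoutError(f"container still missing after {max_attempts} attempts")
```

Bounding the attempts means a stuck request fails with a clear error instead of relying solely on the pytest timeout.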
2 changes: 2 additions & 0 deletions templates/core/.env.sample
@@ -54,4 +54,6 @@ DEPLOY_NEXUS=true
RESOURCE_PROCESSOR_TYPE="vmss_porter"
API_APP_SERVICE_PLAN_SKU_SIZE="P1v2"
APP_SERVICE_PLAN_SKU="P1v2"
ADMIN_JUMPBOX_VM_SKU="Standard_B2s"
RESOURCE_PROCESSOR_VMSS_SKU="Standard_B2s"
ENABLE_AIRLOCK_MALWARE_SCANNING=false
2 changes: 1 addition & 1 deletion templates/core/terraform/admin-jumpbox.tf
@@ -29,7 +29,7 @@ resource "azurerm_windows_virtual_machine" "jumpbox" {
resource_group_name = azurerm_resource_group.core.name
location = azurerm_resource_group.core.location
network_interface_ids = [azurerm_network_interface.jumpbox_nic.id]
size = "Standard_B2s"
size = var.admin_jumpbox_vm_sku
allow_extension_operations = true
admin_username = "adminuser"
admin_password = random_password.password.result
2 changes: 1 addition & 1 deletion templates/core/terraform/airlock/airlock_processor.tf
@@ -72,7 +72,7 @@ resource "azurerm_linux_function_app" "airlock_function_app" {
}

site_config {
always_on = var.enable_local_debugging ? true : false
always_on = true
container_registry_managed_identity_client_id = azurerm_user_assigned_identity.airlock_id.client_id
container_registry_use_managed_identity = true
vnet_route_all_enabled = true
1 change: 1 addition & 0 deletions templates/core/terraform/main.tf
@@ -145,6 +145,7 @@ module "resource_processor_vmss_porter" {
key_vault_id = azurerm_key_vault.kv.id
subscription_id = var.arm_subscription_id
resource_processor_number_processes_per_instance = var.resource_processor_number_processes_per_instance
resource_processor_vmss_sku = var.resource_processor_vmss_sku

depends_on = [
module.azure_monitor,
@@ -55,7 +55,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "vm_linux" {
name = "vmss-rp-porter-${var.tre_id}"
location = var.location
resource_group_name = var.resource_group_name
sku = "Standard_B2s"
sku = var.resource_processor_vmss_sku
instances = 1
admin_username = "adminuser"
disable_password_authentication = false
@@ -15,6 +15,7 @@ variable "app_insights_connection_string" {}
variable "key_vault_name" {}
variable "key_vault_id" {}
variable "resource_processor_number_processes_per_instance" {}
variable "resource_processor_vmss_sku" {}
variable "subscription_id" {
description = "The subscription id to create the resource processor permission/role. If not supplied will use the TF context."
type = string
12 changes: 12 additions & 0 deletions templates/core/terraform/variables.tf
@@ -112,6 +112,18 @@ variable "resource_processor_type" {
type = string
}

variable "resource_processor_vmss_sku" {
type = string
default = "Standard_B2s"
description = "The SKU of the resource processor VMSS."
}

variable "admin_jumpbox_vm_sku" {
type = string
default = "Standard_B2s"
description = "The SKU of the admin jumpbox VM."
}

variable "stateful_resources_locked" {
type = bool
default = true
2 changes: 1 addition & 1 deletion templates/core/version.txt
@@ -1 +1 @@
__version__ = "0.4.3"
__version__ = "0.4.7"
