Volume 1: Coach Alice: Introduction

To time travel, an entity must first accelerate. We will coach Alice and she will coach us.

Misc notes
More Detailed TODOs
diff --git a/dffml/util/testing/consoletest/cli.py b/dffml/util/testing/consoletest/cli.py
index 0f8294155..dd9e057c8 100644
--- a/dffml/util/testing/consoletest/cli.py
+++ b/dffml/util/testing/consoletest/cli.py
@@ -44,7 +44,7 @@ async def main(argv: List[str]) -> None:
nodes = []
for node in parse_nodes(args.infile.read()):
- if not node.options.get("test", False):
+ if args.filter is not None and not node.options.get(args.filter, False):
continue
if node.directive == "code-block":
nodes.append(
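The diff above assumes a corresponding --filter argument was added to the parser, which is not shown in the hunk. A minimal sketch of what that definition could look like (hypothetical; the real flag definition in consoletest/cli.py may differ):

import argparse

# Hypothetical sketch of the assumed --filter option; the real parser
# defines more arguments (e.g. infile).
parser = argparse.ArgumentParser(prog="consoletest")
parser.add_argument(
    "--filter",
    default=None,
    help="Only keep nodes whose options set this key to a truthy value; "
    "when omitted, nodes are not filtered",
)

args = parser.parse_args(["--filter", "test"])
print(args.filter)  # -> test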
diff --git a/entities/alice/README.rst b/entities/alice/README.rst
index aca0dbc87..53465db6f 100644
--- a/entities/alice/README.rst
+++ b/entities/alice/README.rst
@@ -4,6 +4,18 @@ Alice
Install
*******
+Install latest known working version
+
+.. code-block:: console
+
+ $ python -m pip install \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=dffml" \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=dffml-feature-git&subdirectory=feature/git" \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=shouldi&subdirectory=examples/shouldi" \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=dffml-config-yaml&subdirectory=configloader/yaml" \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=dffml-operations-innersource&subdirectory=operations/innersource" \
+ "https://github.com/intel/dffml/archive/42ed3da715f1c89b4c31d705cf7f7738f17c9306.zip#egg=alice&subdirectory=entities/alice"
+
Install for development
.. code-block:: console
diff --git a/setup.py b/setup.py
index 47c595547..8157381f4 100644
--- a/setup.py
+++ b/setup.py
@@ -75,6 +75,7 @@ setup(
# Temporary until we split consoletest into its own package
install_requires=[],
extras_require={
+ "consoletest-jsonpath-filter": ["jsonpath-python"],
"dev": DEV_REQUIRES,
**plugins.PACKAGE_NAMES_BY_PLUGIN_INSTALLABLE,
},
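The new consoletest-jsonpath-filter extra pulls in jsonpath-python. A minimal sketch of how a JSONPath expression could select nodes by their options, assuming that package's JSONPath class (the eventual consoletest integration may differ):

from jsonpath import JSONPath  # provided by the jsonpath-python package

# A node's options roughly as consoletest parses them from a directive.
node = {"options": {"test": True, "compare-output": False}}

# Keep the node only when the JSONPath query matches a truthy value.
matches = JSONPath("$.options.test").parse(node)
print(any(matches))  # -> True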
More detailed future work
2022-06-29 14:00 UTC -7
2022-06-30 13:00 UTC -7

Triggerable Workflow for Alice Please Contribute
Failure to Launch tbDEX Stack
Cleaning Up the Docs Build
$ git log -p -- dffml/version.py \
| grep \+VERSION \
| grep -v rc \
| sed -e 's/.* = "//g' -e 's/"//g' \
| head -n 1
0.4.0
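The same extraction works in Python instead of a grep/sed pipeline; a sketch, assuming it runs from the repo root:

import re
import subprocess

# git log -p over dffml/version.py, newest commits first.
log = subprocess.run(
    ["git", "log", "-p", "--", "dffml/version.py"],
    capture_output=True, text=True, check=True,
).stdout

# Added VERSION lines, skipping release candidates.
versions = [
    match.group(1)
    for match in re.finditer(r'^\+VERSION = ["\'](.+?)["\']$', log, re.M)
    if "rc" not in match.group(1)
]
print(versions[0])  # newest, e.g. 0.4.0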
$ git log -p -- dffml/version.py | grep \+VERSION | grep -v rc
+VERSION = "0.4.0"
+VERSION = "0.3.7"
+VERSION = "0.3.6"
+VERSION = "0.3.5"
+VERSION = "0.3.4"
+VERSION = "0.3.3"
+VERSION = "0.3.2"
+VERSION = "0.3.1"
+VERSION = "0.3.0"
+VERSION = "0.2.1"
+VERSION = "0.2.0"
+VERSION = '0.2.0'
+VERSION = '0.1.2'
+VERSION = '0.1.1'
+VERSION = '0.1.0'
Refactor Meta Issue Creation to Accept Dynamic Inputs
Debugging Meta Issues as Output Operations
Verification of Successful Alice Should I Contribute Job
We needed to get the artifact uploaded with the collector results so that we could inspect it.

$ gh run list --workflow alice_shouldi_contribute.yml
completed success alice: ci: shouldi: contribute: Remove errant chdir to tempdir Alice Should I Contribute? alice workflow_dispatch 2639160823 1m48s 3h
completed success alice: ci: shouldi: contribute: Upload collector outputs as artifacts Alice Should I Contribute? alice workflow_dispatch 2638950785 56m14s 4h
completed success alice: ci: shouldi: contribute: Basic job Alice Please Contribute Recommended Community Standards alice workflow_dispatch 2638890594 1m15s 4h
$ gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}'
2639160823
2638950785
2638890594
$ gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}' | head -n 1
2639160823

We figured out how to use the GitHub CLI's builtin jq support (--jq) to query the run's artifacts.

$ gh api -H "Accept: application/vnd.github+json" /repos/intel/dffml/actions/runs/$(gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}' | head -n 1)/artifacts --jq '.artifacts[] | {(.name): .archive_download_url}'
{"collector_output":"https://api.github.com/repos/intel/dffml/actions/artifacts/293370454/zip"} Here we select only the URL of the archive. There is only one artifact, $ gh api -H "Accept: application/vnd.github+json" /repos/intel/dffml/actions/runs/$(gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}' | head -n 1)/artifacts --jq '.artifacts[] | .archive_download_url'
https://api.github.com/repos/intel/dffml/actions/artifacts/293370454/zip

Confirm it is a zip file by looking at the bytes with xxd.

$ gh api -H "Accept: */*" $(gh api -H "Accept: application/vnd.github+json" /repos/intel/dffml/actions/runs/$(gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}' | head -n 1)/artifacts --jq '.artifacts[] | .archive_download_url' | sed -e 's/https:\/\/api.github.com//g') | xxd
00000000: 504b 0304 1400 0800 0800 25be e854 0000 PK........%..T..
00000010: 0000 0000 0000 0000 0000 0a00 0000 7265 ..............re
00000020: 706f 732e 6a73 6f6e cd53 5dab db30 0cfd pos.json.S]..0..
00000030: 2b25 cfed aad8 f247 fa76 5fef f39e 564a +%.....G.v_...VJ
00000040: 906d a5c9 d626 2571 584b e97f 9f1d 36d8 .m...&%qXK....6.
00000050: 18bd dc3d 6ccc 200b eb1c 593e 46ba 1773 ...=l. ...Y>F..s
00000060: 1fe9 78e4 50ec 56f7 a28d f132 edb6 db63 ..x.P.V....2...c
00000070: 17db d97d f0c3 797b 09d7 cf43 dbf7 b76d ...}..y{...C...m
00000080: 0623 4f71 617e e15b f2ef 4c58 af8a 8629 .#Oqa~.[..LX...)
00000090: ce23 4f4b f2c8 27a6 89eb af29 abeb eb0b .#OK..'....)....
000000a0: 8fdd 901f b06f e834 f17a 15c7 39ed df0f .....o.4.z..9...
000000b0: bfba 37a0 c51d 522d 9a63 3b8c f5a9 ebb9 ..7...R-.c;.....
000000c0: f643 1298 afbe 3fd6 a9f2 6b7a d9ea a50f .C....?...kz....
000000d0: 3c4e dca7 b034 f824 9ec3 3fec 3758 953f <N...4.$..?.7X.?
000000e0: c38b e5c2 49fe b98b f5d4 52d6 b92f 00ad ....I.....R../..
000000f0: 2623 83a7 408d 7352 a317 dab0 6215 a0d4 &#..@.sR....b...
00000100: 2548 d056 a8aa ca1f f427 5caf 6440 cb8c %H.V.....'\.d@..
00000110: 06ad 0343 257a cb3a 4803 440e 981c d920 ...C%z.:H.D....
00000120: 524a e62a 2d41 341e 146a acc8 2b6c 82f5 RJ.*-A4..j..+l..
00000130: 28bc 5589 a81b aca4 741a 1bf7 37b9 649d (.U.....t...7.d.
00000140: 4228 856c 2482 d724 1481 f3d2 28e6 0a50 B(.l$..$....(..P
00000150: 1963 2450 c9fe bfe0 1e12 3974 7e69 9ae7 .c$P......9t~i..
00000160: 7df6 afdd 2135 5971 a229 d6f3 2550 5ce6 }...!5Yq.)..%P\.
00000170: b510 20c4 06cc 06ec 4721 7758 edca f253 .. .....G!wX...S
00000180: d6ca d738 529e b447 5adf 0050 4b07 08de ...8R..GZ..PK...
00000190: 90d9 ba63 0100 00e3 0300 0050 4b01 0214 ...c.......PK...
000001a0: 0014 0008 0008 0025 bee8 54de 90d9 ba63 .......%..T....c
000001b0: 0100 00e3 0300 000a 0000 0000 0000 0000 ................
000001c0: 0000 0000 0000 0000 0072 6570 6f73 2e6a .........repos.j
000001d0: 736f 6e50 4b05 0600 0000 0001 0001 0038 sonPK..........8
000001e0: 0000 009b 0100 0000 00 ......... Make an authenticated query to the GitHub API asking for the resource. Pipe the TODO Operation to download all GitHub run artifacts. $ gh api -H "Accept: */*" $(gh api -H "Accept: application/vnd.github+json" /repos/intel/dffml/actions/runs/$(gh run list --workflow alice_shouldi_contribute.yml | awk '{print $(NF-2)}' | head -n 1)/artifacts --jq '.artifacts[] | .archive_download_url' | sed -e 's/https:\/\/api.github.com//g') > collector_output.zip Extract the zipfile to a directory $ python -m zipfile -e collector_output.zip collector_output/ Look at the contents of the extracted directory to confirm all the files we TODO Verification via cartographic or other trust mechanisms $ ls -lAF collector_output/
$ ls -lAF collector_output/
total 4
-rw-r--r-- 1 pdxjohnny pdxjohnny 995 Jul 8 20:09 repos.json

Then we ask Python to use the json.tool module to pretty-print the collector results.

$ python -m json.tool < collector_output/repos.json
{
"untagged": {
"https://github.com/pdxjohnny/httptest": {
"key": "https://github.com/pdxjohnny/httptest",
"features": {
"release_within_period": [
false,
true,
false,
false,
false,
true,
false,
false,
false,
false
],
"author_line_count": [
{},
{
"John Andersen": 374
},
{
"John Andersen": 37
},
{},
{},
{
"John Andersen": 51
},
{},
{},
{},
{}
],
"commit_shas": [
"0486a73dcadafbb364c267e5e5d0161030682599",
"0486a73dcadafbb364c267e5e5d0161030682599",
"c53d48ee4748b07a14c8e6d370aab0eaba8d2103",
"56302fc054649ac54fd8c42c850ea6f4933b64fb",
"56302fc054649ac54fd8c42c850ea6f4933b64fb",
"56302fc054649ac54fd8c42c850ea6f4933b64fb",
"a8b540123f340c6a25a0bc375ee904577730a1ec",
"a8b540123f340c6a25a0bc375ee904577730a1ec",
"a8b540123f340c6a25a0bc375ee904577730a1ec",
"a8b540123f340c6a25a0bc375ee904577730a1ec"
],
"dict": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
},
"last_updated": "2022-07-08T23:49:11Z",
"extra": {}
}
}
}

Then we wrote this little mini tutorial by dumping our shell herstory and adding notes.

$ herstory | tail -n 50 > /tmp/recent-herstory
$ vim /tmp/recent-herstory

Then we copy paste and upload to somewhere our colleagues and ourselves will have access to.

Manual Spin Up of Digital Ocean VM
We will add a comment to our key whose email domain we currently use for this deployment.

$ ssh-keygen -f ~/.ssh/nahdig -b 4096 -C 'pdxjohnny@contribute.shouldi.alice.nahdig.com'
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pdxjohnny/.ssh/nahdig
Your public key has been saved in /home/pdxjohnny/.ssh/nahdig.pub
The key fingerprint is:
SHA256:PsTjWi5ZTr3KCd2ZYTTT/Xmajkj9QutkFbysOogrWwg pdxjohnny@DESKTOP-3LLKECP
The key's randomart image is:
+---[RSA 4096]----+
| |
| . o |
| + . + |
| . . o . +.|
| E S.o +.o|
| . . =o+.=.o o.|
| . o**.=o=.o |
| ..+*o+o=o+ |
| .ooo=.ooo.o |
+----[SHA256]-----+
$ cat ~/.ssh/nahdig.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDX1xvaybJQLrUxytn+AR+F3dDeAxFDMr0dyDt6zGs45x8VsA3TDrneZZ7ggzN63Uzbk+CAuBRDPGms6FgPswliU6xgp8X073Pcn2va7JxbkIz0LQCxdzfAoOMKIIiI7SmYSD4IDFqrEHnN+I6j4el+IFLaGTibCAR0+zK4wKPX97NE27EPUL/DYkT2eBAF/onLZAQm3tRznTYUSyWaXlWHQUD6y/3QtvhH3WVIUKRV8b6POwoiVa6GeMjM5jVCBRB+nfrhjNBBp7Ro7MzuNn+z8D6puyV1GFxWtSm953UYFa5UcahhiwFRWXLfJmVjwEZDm0//hMnw1TcmapBR99dwrBftz+YFF5rpTyWvxbnl5G/wn2DQ/9RFR6SeD3GImYRhVSFkuNZkQCiaj2+bT+ngnFPEA5ed4nijFnIgvAzPz9kk7uojjW3SfEdhED0mhwwBlLNOr7pGu9+X2xZQIlFttuJaOjd+GYBWypchd7rWdURHoqR+07pXyyBAmNjy6CKwSWv9ydaIlWseCOTzKxjy3Wk81MoaH/RhBXdRFqS1mP12TuahMzTvvVuSfQQJKCO05sIrzSEykxg1u6HEZXDyeKoVwN9V1/tq3QGa4tE/WmMNaLukef9ws3Drt1D7HWTF7u/N/zjtfiyEXRAMkixqywHfCrrxXKGPR7uvueLUkQ== pdxjohnny@contribute.shouldi.alice.nahdig.com

TODO(security) Use VM startup script via cloud-init or otherwise to exfil the host keys.

We ssh into the new VM.

$ ssh -i ~/.ssh/nahdig -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PasswordAuthentication=no root@143.198.133.87

This MUST happen after spin up so that clients SSH'ing in can verify the host keys.

POC Launch of tbDEX Stack

As root, add a non-root user with root/sudo privileges and a bash shell.

# useradd -m -s $(which bash) pdxjohnny
# usermod -aG sudo pdxjohnny

Allow the user to use sudo without a password.

# sed -i 's/\%sudo\tALL=(ALL:ALL) ALL/\%sudo\tALL=(ALL:ALL) NOPASSWD:ALL/g' /etc/sudoers

Update the VM.

# apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y

Install tools.

# DEBIAN_FRONTEND=noninteractive apt-get install -y tmux bash-completion vim git python3 python3-pip

Install GitHub CLI for auth by adding the custom GitHub package repo to the OS.

# curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
# apt-get update
# apt-get install -y gh

Install Docker.

# apt-get update
# apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# mkdir -p /etc/apt/keyrings
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
# apt-get update
# apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Verify Docker is running by querying the daemon.

Add the non-root user to the docker group.

# usermod -aG docker pdxjohnny

Now leave the root session.

Configure git by copying over creds to the new user; run the following:

$ (cd ~ && tar -c .gitconfig .config/gh | ssh -i ~/.ssh/nahdig -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PasswordAuthentication=no pdxjohnny@143.198.133.87 tar -xv)

Then log into the VM via ssh as the new user.

$ ssh -i ~/.ssh/nahdig -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PasswordAuthentication=no pdxjohnny@143.198.133.87

Install dotfiles.

$ git config --global user.name "John Andersen"
$ git config --global user.email johnandersenpdx@gmail.com
$ git clone https://github.com/pdxjohnny/dotfiles ~/.dotfiles
$ cd ~/.dotfiles
$ ./install.sh
$ echo 'source "${HOME}/.pdxjohnnyrc"' | tee -a ~/.bashrc
$ dotfiles_branch=$(hostname)-$(date "+%4Y-%m-%d-%H-%M")
$ git checkout -b $dotfiles_branch
$ sed -i "s/Dot Files/Dot Files: $dotfiles_branch/g" README.md
$ git commit -sam "Initial auto-tailor for $(hostname)"
$ git push --set-upstream origin $dotfiles_branch

Close out the SSH session, then SSH back into the host, recording the session.

$ python -m asciinema rec --idle-time-limit 0.5 --title "$(date +%4Y-%m-%d-%H-%M-%ss)" --command "ssh -t -i ~/.ssh/nahdig -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PasswordAuthentication=no pdxjohnny@143.198.133.87 tmux" >(xz --stdout - > "$HOME/asciinema/rec-$(hostname)-$(date +%4Y-%m-%d-%H-%M-%ss).json.xz")

Update Python packaging core packages.

$ python3 -m pip install -U pip setuptools wheel

Install docker-compose.

$ python3 -m pip install docker-compose

Add the user-level script directory to PATH. Find the location of the installed script directory and append it:

$ python_bin=$(python3 -c 'import os,sysconfig;print(sysconfig.get_path("scripts",f"{os.name}_user"))' | sed -e "s#${HOME}#\${HOME}#g")
$ echo "export PATH=\"\$PATH:$python_bin\"" | tee -a ~/.bashrc
$ exec bash

We should now see the directory containing the installed scripts in our PATH.

$ echo $PATH
/home/pdxjohnny/.local/bin:/home/pdxjohnny/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/pdxjohnny/.bin:/home/pdxjohnny/.bin:/home/pdxjohnny/.local/bin:/home/pdxjohnny/.local/bin
Collector Result Storage
Export the token as a variable within the server tmux shell session.

$ export DIGITALOCEAN_ACCESS_TOKEN=asdjfojdf9j82efknm9dsfjsdf

Install the Python library for interfacing with DigitalOcean.
$ pip install -U python-digitalocean
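A quick sketch of exercising the token with python-digitalocean (Manager and get_all_droplets() are from that library's documented API):

import os

import digitalocean  # python -m pip install python-digitalocean

# Token exported above as DIGITALOCEAN_ACCESS_TOKEN.
manager = digitalocean.Manager(token=os.environ["DIGITALOCEAN_ACCESS_TOKEN"])

# List droplets to confirm the token works.
for droplet in manager.get_all_droplets():
    print(droplet.name, droplet.ip_address)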
Install the Python dependency to interact with the Spaces API.

$ pip install boto3

Go to https://cloud.digitalocean.com/account/api/tokens. DO NOT use your account API token here; generate dedicated Spaces access keys.

$ export SPACES_KEY=sdfjfjasdofj0iew
$ export SPACES_SECRET=3j41ioj239012j3k12j3k12jlkj2

Write a Python script to attempt to query the space contents.

upload_static_file_contents_to_space.py

import os
import boto3
session = boto3.session.Session()
client = session.client(
"s3",
region_name="sfo3",
endpoint_url="https://sfo3.digitaloceanspaces.com",
aws_access_key_id=os.getenv("SPACES_KEY"),
aws_secret_access_key=os.getenv("SPACES_SECRET"),
)
response = client.list_buckets()
spaces = [space["Name"] for space in response["Buckets"]]
print("Spaces List: %s" % spaces)
# Call the put_object command and specify the file to upload.
client.put_object(
Bucket="results-alice-shouldi-contribute", # The path to the directory you want to upload the object to, starting with your Space name.
Key="collector.txt", # Object key, referenced whenever you want to access this file later.
Body=b"{SDflkasdj}", # The object's contents.
ACL="public-read", # Defines Access-control List (ACL) permissions, such as private or public.
)

Run the upload.

$ python3 upload_static_file_contents_to_space.py
Spaces List: ['results-alice-shouldi-contribute']

Boto3 Source
upload_static_file_contents_to_space_asyncio.py

import os
import asyncio
import aioboto3
async def main():
session = aioboto3.Session()
async with session.client(
"s3",
region_name="sfo3",
endpoint_url="https://sfo3.digitaloceanspaces.com",
aws_access_key_id=os.getenv("SPACES_KEY"),
aws_secret_access_key=os.getenv("SPACES_SECRET"),
) as client:
# Grab the list of buckets
response = await client.list_buckets()
spaces = [space["Name"] for space in response["Buckets"]]
print("Spaces List: %s" % spaces)
# Call the put_object command and specify the file to upload.
await client.put_object(
Bucket="results-alice-shouldi-contribute", # The path to the directory you want to upload the object to, starting with your Space name.
Key="collector.txt", # Object key, referenced whenever you want to access this file later.
Body=b"{SDflkasdj}", # The object's contents.
ACL="public-read", # Defines Access-control List (ACL) permissions, such as private or public.
)
if __name__ == "__main__":
    asyncio.run(main())

See if it works, it does!

$ python3 upload_static_file_contents_to_space_asyncio.py
Spaces List: ['results-alice-shouldi-contribute']

We now begin creating a source based off the scripts above.

aioboto3_dffml_source.py

"""
Source for storing and retrieving data from S3
"""
import os
import json
import string
import asyncio
from typing import Dict, List, AsyncIterator
import aioboto3
from dffml import (
config,
field,
Record,
BaseSourceContext,
BaseSource,
entrypoint,
export,
)
class Boto3SourceContext(BaseSourceContext):
async def update(self, record):
await self.parent.client.put_object(
Bucket=self.parent.config.bucket,
Key="".join(
[
character
for character in record.key.lower()
if character in string.ascii_lowercase
]
)
+ ".json",
Body=json.dumps(export(record)),
ACL=self.parent.config.acl,
)
    async def records(self) -> AsyncIterator[Record]:
        # TODO Yield records by listing bucket contents; stub for now.
        pass
async def record(self, key: str) -> Record:
return Record(key)
@config
class Boto3SourceConfig:
"""
References:
- https://aioboto3.readthedocs.io/en/latest/usage.html
"""
region_name: str
endpoint_url: str
aws_access_key_id: str
aws_secret_access_key: str
bucket: str
acl: str = field(
"Permissions level required for others to access. Options: private|public-read",
default="private",
)
@entrypoint("boto3")
class Boto3Source(BaseSource):
"""
Uploads a record to S3 style storage
"""
CONFIG = Boto3SourceConfig
CONTEXT = Boto3SourceContext
async def __aenter__(self) -> "Boto3Source":
        await super().__aenter__()
self.session = aioboto3.Session()
self.client = await self.session.client(
"s3",
region_name=self.config.region_name,
endpoint_url=self.config.endpoint_url,
aws_access_key_id=self.config.aws_access_key_id,
aws_secret_access_key=self.config.aws_secret_access_key,
).__aenter__()
return self
async def __aexit__(self, _exc_type, _exc_value, _traceback) -> None:
await self.client.__aexit__(None, None, None)
self.client = None
self.session = None
import dffml.noasync
dffml.noasync.save(
Boto3Source(
bucket="results-alice-shouldi-contribute",
region_name="sfo3",
endpoint_url="https://sfo3.digitaloceanspaces.com",
aws_access_key_id=os.getenv("SPACES_KEY"),
aws_secret_access_key=os.getenv("SPACES_SECRET"),
acl="public-read",
),
    Record(
        key="https://github.com/pdxjohnny/httptest",
        data={"features": {"hello": "world"}},
    ),
)
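Since the record was saved with a public-read ACL, one way to sanity check the upload is to fetch it anonymously. A sketch, assuming DigitalOcean Spaces' usual <bucket>.<region>.digitaloceanspaces.com public URL layout and the same key sanitization as update() above:

import json
import string
import urllib.request

bucket = "results-alice-shouldi-contribute"
repo_url = "https://github.com/pdxjohnny/httptest"

# Mirror Boto3SourceContext.update(): lowercase ascii letters only.
key = "".join(
    character
    for character in repo_url.lower()
    if character in string.ascii_lowercase
) + ".json"

url = f"https://{bucket}.sfo3.digitaloceanspaces.com/{key}"
with urllib.request.urlopen(url) as response:
    record = json.loads(response.read())
print(record["features"])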
overlays/alice/shouldi/contribute/upload_collector_output_to_bucket.py

import os
import json
import string
import asyncio
import tempfile
import contextlib
from typing import NewType

import aioboto3
import aiobotocore.client

import dffml
import dffml_feature_git.feature.definitions
AioBoto3Client = NewType("AioBoto3Client", aiobotocore.client.AioBaseClient)
AioBoto3RegionName = NewType("AioBoto3RegionName", str)
AioBoto3EndpointUrl = NewType("AioBoto3EndpointUrl", str)
AioBoto3AWSKeyId = NewType("AioBoto3AWSKeyId", str)
AioBoto3AWSAccessKey = NewType("AioBoto3AWSAccessKey", str)
AioBoto3AWSACL = NewType("AioBoto3AWSACL", str)
AioBoto3Bucket = NewType("AioBoto3Bucket", str)
MINIOServerShouldStart = NewType("MINIOServerShouldStart", bool)
@contextlib.asynccontextmanager
async def minio_server(
should_start: MINIOServerShouldStart,
) -> AioBoto3EndpointUrl:
# Bail out if not wanted, effectively auto start if wanted. Inclusion of this
# operation within an overlay with the current overlay mechanisms at load
# time happening in dffml_operations_innersource.cli and alice.cli for
# shouldi and please contribute results in the operation getting combined
# with the rest prior to first call to DataFlow.auto_flow.
    if not should_start:
        yield None
        return
with tempfile.TemporaryDirectory() as tempdir:
# TODO Audit does this kill the container successfully aka clean it up
# TODO We have no logger, can we pull from stack if we are in
# MemoryOrchestrator?
async for event, result in dffml.run_command_events(
[
"docker",
"run",
"quay.io/minio/minio",
"server",
"/data",
"--console-address",
":9001",
],
events=[
dffml.Subprocess.STDOUT_READLINE,
dffml.Subprocess.STDERR_READLINE,
],
):
if (
event is dffml.Subprocess.STDOUT_READLINE
and result.startswith("API:")
):
# API: http://172.17.0.2:9000 http://127.0.0.1:9000
yield result.split()[1]
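# NOTE The docker run above does not publish ports (-p 9000:9000) or
# mount the tempdir as a volume (-v), so the yielded API address is only
# reachable via the container's bridge IP and uploaded data is not
# persisted. MinIO also falls back to its default minioadmin credentials
# unless MINIO_ROOT_USER / MINIO_ROOT_PASSWORD are passed with -e.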
# **TODO** We have parsers for numpy style docstrings to config classes which
# can help us with what was previously the help field() argument.
@contextlib.asynccontextmanager
async def bucket_client_connect(
endpoint_url: AioBoto3EndpointUrl,
region_name: AioBoto3RegionName = None,
aws_access_key_id: AioBoto3AWSKeyId = None,
aws_secret_access_key: AioBoto3AWSAccessKey = None,
acl: AioBoto3AWSACL = "private",
) -> AioBoto3Client:
"""
Connect to an S3 bucket.
References:
- https://aioboto3.readthedocs.io/en/latest/usage.html
This is the short description.
This is the longer description.
Parameters
----------
acl : str
Permissions level required for others to access. Options: private|public-read
Returns
-------
str_super_cool_arg : AioBoto3Client
The aiobotocore.client.AioBaseClient object
Examples
--------
    >>> async with bucket_client_connect(
    ...     endpoint_url="https://sfo3.digitaloceanspaces.com",
    ...     region_name="sfo3",
    ...     aws_access_key_id="...",
    ...     aws_secret_access_key="...",
    ...     acl="private",
    ... ) as client:
    ...     pass
"""
session = aioboto3.Session()
async with session.client(
"s3",
        region_name=region_name,
        endpoint_url=endpoint_url,
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
) as client:
# Grab the list of buckets
response = await client.list_buckets()
buckets = [bucket["Name"] for bucket in response["Buckets"]]
print("Buckets List: %s" % buckets)
# Client initialization complete
yield client
"""
# Old style runs into the issue where how do we provide the
# config server URL dynamically? So we experimented with this
# operation based approach with objects as inputs.
@dffml.op(
inputs={"results": dffml.group_by_output},
stage=dffml.Stage.OUTPUT,
imp_enter={
"client": (lambda self: aiohttp.ClientSession(trust_env=True))
},
)
"""
async def upload_to_bucket(
    client: AioBoto3Client,
    bucket: AioBoto3Bucket,
    repo_url: dffml_feature_git.feature.definitions.URL,
    results: dffml.group_by_output,
    acl: AioBoto3AWSACL = "private",
) -> None:
await client.put_object(
Bucket=bucket,
# TODO(security) Ensure we don't have collisions
# with two different repo URLs generating the same
        # filename; pretty sure the below code has that
# as an active issue!!!
Key="".join(
[
character
for character in repo_url.lower()
if character in string.ascii_lowercase
]
)
+ ".json",
        Body=json.dumps(dffml.export(results)),
ACL=acl,
)
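One way to resolve the TODO(security) key collision concern above is to key objects off a digest of the full repo URL rather than its stripped-down letters. A sketch (hypothetical helper, not yet part of the overlay):

import hashlib

def bucket_key_for_repo_url(repo_url: str) -> str:
    """Collision resistant object key: SHA-256 of the exact repo URL."""
    return hashlib.sha256(repo_url.encode()).hexdigest() + ".json"

# URLs that the letters-only scheme would collide now stay distinct:
print(bucket_key_for_repo_url("https://github.com/pdxjohnny/httptest"))
print(bucket_key_for_repo_url("https://github.com/pdxjohnny/http-test"))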
Playing With Async Context Managers as Data Flows
Moved to #1406
Refactoring and Thinking About Locking of Repos for Contributions
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…run CodeNarc if not pathlib object Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…d characters on parse error Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
… trains of thought which align with strategic principles: Modify wording Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…_CLEANUP is set do not remove repo directory Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…one and using local repo

.. code-block:: console

    $ alice shouldi contribute -log debug \
        -keys local \
        -inputs \
          https://github.com/intel/dffml=LocalRepoURL \
          $PWD=LocalRepoDirectory

Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…e URL from local directory with Git repo

.. code-block:: console

    $ DFFML_FEATURE_GIT_SKIP_CLEANUP=1 alice -log debug shouldi contribute -record-def LocalRepoDirectory -keys .

Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…with nothing if found at end of repo URL Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
… not GitHub Actions by checking for runs: Keyword Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Related: #1315 Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…sion to not use setuptools_scm while in monorepo Related: https://github.com/pypa/setuptools_scm/blob/e9cbb5a68b3ae6d5c549bda293ef60bb5ec8ec7e/src/setuptools_scm/_integration/pyproject_reading.py#L68-L73 Related: #1315 Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…nd: Isolated dynamic analysis
…ile identification Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…usness: pseudo code: From Claude AI via Alfredo Co-authored-by: Alfredo Alvarez <alfredo.g.alvarez@intel.com>
…user when creating or updating issue by title Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…when searching only for issues created by logged in user Introduced-in: 0ea349b Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…usness: Remove references to Heartwood link to SCITT ActivityPub pull request Related: scitt-community/scitt-api-emulator#37
…itigation Option: Mention SCITT notary CWT issuer as ssh keys endpoint Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…est to discussion thread and provide high level plan Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
…on 3.12 support) Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Signed-off-by: John Andersen <johnandersenpdx@gmail.com>
Alice is Here! It’s the 2nd Party and everyone is invited 💃🥳. Alice is both the nickname for the Open Architecture, the methodology for describing any system architecture, as well as the entity defined using that description of architecture. She is the entity and the architecture.
Misc notes:

- alice branch, if working: mv dffml/operations/python.py operations/innersource/dffml_operations_innersource/python_ast.py
- .gitpod.yml: auto start setup code tutorial.ipynb
- when done: alice ask, which queries all our docs, logs, notes, issues, etc.