This repository has been archived by the owner on Aug 25, 2024. It is now read-only.

CONTRIBUTING: Amalgamize #1660

Closed
wants to merge 22 commits into from

Conversation


@johnandersen777 johnandersen777 commented Aug 6, 2024



Screenshot-DFFML-CONTRIBUTING-2024-08-06

## Abstract

Federation is the act of communicating data about multiple streams of topics: creating, updating, deleting, and delivering notifications and content. It is what enables decentralized social networking. This CONTRIBUTING document details [trust boundaries](https://threat-modeling.com/data-flow-diagrams-in-threat-modeling/) and evaluation guidelines for determining the security properties of arbitrary entities, so as to optimally facilitate secure communication and collaboration toward shared goals while maintaining the integrity of all trust boundaries of every entity engaged. These methodologies enable work on this project to progress in a self-directed manner in which all contributing and reliant entities maintain agency, along with often distinct strategic plans, principles, and values for themselves and their long-term or ad-hoc organizations.

This document outlines best practices for poly-repo maintainers and contributors, detailing strategic plans and principles for federated code repositories. It emphasizes continuous trust evaluation, automated CI/CD workflows, and federated transparency receipts to ensure alignment with community-agreed values and strategic objectives. The document also addresses developer and maintainer expectations, federation triggers, and the integration of automated checks into CI/CD processes.

- Trust boundaries are evaluated continuously, and misalignment triggers retraining of AI models and potential blocking of data events.
- Developers and maintainers must document issues, federate forges, and adhere to CI/CD best practices for decentralized governance.
- Federation triggers and automated workflows ensure optimal notification and alignment with community values and strategic plans.

Conditions that may result in a lack of federation include:

- Misalignment with Strategic Plans: Entities that do not align with the project's strategic values, plans, and principles can be blocked from federating new data events.
- Detection of Malicious Intent: Entities suspected of malicious activities or failing to respect shared resources and time may be excluded from federation.
- Lack of Contact: When there is no contact with an entity, attempts at federation may be blocked to ensure security and integrity.
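The three blocking conditions above can be sketched as a gating function. This is a hypothetical illustration, not DFFML code; the `Entity` fields and the 90-day contact threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Entity:
    # Illustrative entity attributes; names are assumptions, not a DFFML API
    name: str
    aligned_with_strategic_plans: bool
    suspected_malicious: bool
    last_contact_days: Optional[int]  # None means no contact on record


def may_federate(entity: Entity, max_contact_age_days: int = 90) -> Tuple[bool, str]:
    """Return (allowed, reason) for federating new data events with an entity."""
    if not entity.aligned_with_strategic_plans:
        return False, "misalignment with strategic plans"
    if entity.suspected_malicious:
        return False, "suspected malicious intent"
    if entity.last_contact_days is None or entity.last_contact_days > max_contact_age_days:
        return False, "lack of contact"
    return True, "ok"


print(may_federate(Entity("alice", True, False, 10)))   # (True, 'ok')
print(may_federate(Entity("mallory", True, True, 5)))   # (False, 'suspected malicious intent')
print(may_federate(Entity("ghost", True, False, None)))  # (False, 'lack of contact')
```

Each check maps to one bullet; the first failing condition determines the reported reason.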

### Towards More Optimal Communication

Generic best practices for poly-repo maintainers and contributors. 👩🌟🛠️

This doc details the DFFML upstream default strategic plans and principles for entities
our code repository federates with. We treat this document as the community's default
policy; you or the organizations you are engaged with MAY apply additional
context-dependent policies. Using these methodologies, N entities with no prior
information about each other can establish trust within previously unknown execution
contexts and collaborate successfully toward shared goals (software development). These
methods foster innovation and preserve entity agency and alignment with strategic plans,
principles, and values across this poly-repo ecosystem and other concurrent repositories and work.
These methodologies are practiced to maintain trust, security, and alignment with community goals.

When there is no contact with an entity, we block all attempts at federating new data events.
This applies both directly and when detected through bill of materials (BOM) graph analysis
of past data events. This is our trigger for retraining the models, which is why it's
referred to as AI Looped In CI/CD Execution. Overlays may define additional consequences.
When overlays are used, they are added to all data event BOMs, and Trusted Computing Base (TCB)
evaluations are conducted continuously. These evaluations are retroactively invalidated if
we learn that nodes in a graph do not align with our strategic plans, principles, and values.
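The BOM graph analysis mentioned above can be sketched as a reachability check: if any node reachable from a data event's bill of materials is untrusted, federation of that event is blocked. The graph representation and the `untrusted` set here are illustrative assumptions, not the project's actual data model.

```python
from collections import deque


def reachable_untrusted(bom_graph, root, untrusted):
    """Breadth-first search the BOM graph from `root`, collecting untrusted nodes.

    bom_graph maps a node name to the list of nodes it depends on.
    """
    seen, found = {root}, set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node in untrusted:
            found.add(node)
        for dep in bom_graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return found


# Hypothetical BOM of a past data event
bom = {"event-1": ["lib-a", "lib-b"], "lib-a": ["lib-c"], "lib-b": [], "lib-c": []}
bad = reachable_untrusted(bom, "event-1", untrusted={"lib-c"})
print(sorted(bad))  # ['lib-c']
```

A non-empty result would both block federation of the event and, per the policy above, trigger model retraining and retroactive invalidation of TCB evaluations that depended on the untrusted nodes.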

@liceoa liceoa left a comment

LGTM

@johnandersen777 johnandersen777 changed the title CONTRIBUTING: Alice-ize CONTRIBUTING: Amalgamize Aug 6, 2024
@johnandersen777
Author

Sure, let's create a script that sends and receives JSON blobs, ensuring that both the request and response include top-level contents and metadata. The script will also use a JSON schema to validate the data structure. We will use the jsonschema library to validate the JSON blobs.

First, ensure you have the necessary libraries installed:

```shell
pip install openai jsonschema
```

Here is the Python code:

```python
import os
import json

import openai
from jsonschema import validate, ValidationError

# Read the API key from the environment instead of hardcoding it
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# JSON schema for validation
schema = {
    "type": "object",
    "properties": {
        "contents": {"type": "string"},
        "metadata": {"type": "object"},
    },
    "required": ["contents", "metadata"],
}


def validate_json(json_data, schema):
    try:
        validate(instance=json_data, schema=schema)
        return True, ""
    except ValidationError as e:
        return False, e.message


def generate_response(input_json, model="gpt-4", max_tokens=150):
    is_valid, error_message = validate_json(input_json, schema)
    if not is_valid:
        return {"error": f"Invalid JSON input: {error_message}"}

    # gpt-4 is a chat model, so use the chat completions endpoint
    try:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": input_json["contents"]}],
            max_tokens=max_tokens,
            temperature=0.7,
        )
        output_json = {
            "contents": response.choices[0].message.content.strip(),
            "metadata": input_json["metadata"],
        }
        # Validate the outgoing blob against the same schema
        is_valid, error_message = validate_json(output_json, schema)
        if not is_valid:
            return {"error": f"Invalid JSON output: {error_message}"}
        return output_json
    except openai.OpenAIError as e:
        return {"error": str(e)}


if __name__ == "__main__":
    # Example input JSON
    input_json = {
        "contents": "Write a Python script that prints 'Hello, World!'",
        "metadata": {
            "client": "example_client",
            "request_id": "12345",
        },
    }

    # Generate response from OpenAI and print it
    response_json = generate_response(input_json)
    print("Response JSON:\n")
    print(json.dumps(response_json, indent=4))
```

Instructions:

  1. Export your OpenAI API key as the OPENAI_API_KEY environment variable.
  2. Modify the input_json dictionary to include the prompt and any metadata you need.
  3. Run the script to send the JSON blob to the OpenAI API and receive the JSON response.
  4. The script validates both the input and output JSON blobs against the defined schema.

This ensures that the data sent and received between the client and the LLM always adheres to the expected structure.
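For environments where the `jsonschema` package is unavailable, the required-key and type checks that the schema expresses can be approximated with a small stdlib-only stand-in. This is a minimal sketch of the same contract, not a replacement for full JSON Schema validation.

```python
def check_blob(blob):
    """Return a list of validation errors (empty means the blob is valid)."""
    errors = []
    if not isinstance(blob.get("contents"), str):
        errors.append("contents: expected string")
    if not isinstance(blob.get("metadata"), dict):
        errors.append("metadata: expected object")
    return errors


print(check_blob({"contents": "hi", "metadata": {}}))  # []
print(check_blob({"contents": "hi"}))                  # ['metadata: expected object']
```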

