AI Lounge | Breadcrumbs left along the Path

Migrating apt keys to the new trusted.gpg.d directory in Debian

Introduction

This tutorial will guide you through moving keys out of the legacy keyring managed by the deprecated `apt-key` command and into the new `trusted.gpg.d` directory. The deprecation warning often appears during the system update process when using the `apt update` command; it's an indicator that we need to transition to the new way of managing keys on Debian-based systems.

Prerequisites

Before you start, make sure you have the following:

  1. A Debian-based system (Debian, Ubuntu, Mint, etc.)
  2. Terminal access to the system
  3. Administrative (sudo) privileges
  4. Basic knowledge of Linux terminal commands

Step by Step Guide

1. First, run `sudo apt update` to trigger the warning message that identifies the repository whose key needs to be transitioned.

sudo apt update

2. Take note of the key ID for the affected repository. In my example, the key ID for Sublime Text was `BD3DF454`.
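
If the warning itself doesn't show the key ID, you can list the keys still held in the legacy keyring (this command prints its own deprecation warning, which you can ignore here too):

sudo apt-key list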

3. Export the key to a `.gpg` file using the `apt-key export` command. Note that you'll get a deprecation warning, but it's okay for this step.

sudo apt-key export BD3DF454 > sublime.gpg

4. Convert the key to binary format using the `gpg --dearmor` command. This will create a `.gpg.gpg` file.

gpg --dearmor sublime.gpg

5. Rename the `.gpg.gpg` file to something more meaningful. The new name should reflect the software related to the key. In our case, we'll name it `sublime-text.gpg`.

mv sublime.gpg.gpg sublime-text.gpg

6. Move the new `.gpg` file to the `/etc/apt/trusted.gpg.d` directory. This will require sudo privileges.

sudo mv sublime-text.gpg /etc/apt/trusted.gpg.d/sublime-text.gpg

7. Finally, run `sudo apt update` again to check that the deprecation warning is gone.

sudo apt update 

And that's it! You've successfully transitioned an apt key to the new `trusted.gpg.d` directory. Repeat these steps for any other keys you need to migrate.
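
As a side note, steps 3 through 6 can be condensed into a single pipeline, using the same key ID and file name as above:

sudo apt-key export BD3DF454 | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/sublime-text.gpg > /dev/null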

Conclusion

While transitioning keys to the new location might seem a bit technical, it's an important step to keep your system in line with the latest changes in Debian. This tutorial should help you understand the process and resolve any apt-key deprecation warnings you might come across.

Papa Hawk 2023/05/28 15:37

print("Hello, world!")

Setting up Git on your local machine and creating a GitHub repository.

1. **Install Git:**

If you haven't installed Git yet, you can do so using the package manager for your operating system.

For Debian-based Linux distributions, you can use the `apt` package manager:

  sudo apt-get update
  sudo apt-get install git

For macOS, if you have Homebrew installed, you can use it to install Git:

  brew install git

2. **Configure Git:**

After installation, it's a good idea to configure your Git environment. At a minimum, you should set your name and email address because Git embeds this information into each commit you make. Here's how you can do that:

  git config --global user.name "Your Name"
  git config --global user.email "your.email@example.com"

3. **Create a new local repository:**

Navigate to the root directory of your project (if it isn't already a Git repository), and run:

  git init

This initializes a new Git repository in your project directory.

4. **Stage and commit changes:**

You can add changes to the staging area with:

  git add .

The `.` will add all changes. If you want to add only specific files, you can replace `.` with the file names.
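
For example (the file names here are just placeholders for your own):

  git add README.md app.py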

To commit these changes (i.e., to take a “snapshot” of these changes), you can run:

  git commit -m "Initial commit"

The `-m` flag lets you add a message to the commit, which helps in documenting the changes made.

5. **Create a new repository on GitHub:**

Once you have a GitHub account, you can create a new repository by clicking the '+' sign in the upper-right corner of your GitHub dashboard and selecting “New repository”. Provide a name for your repository, add a description (optional), choose to make it public or private, and then click “Create repository”.

6. **Connect your local repository to the remote repository on GitHub:**

To do this, you need to add a “remote” repository that is a link to your repository on GitHub. You can do this with the command:

  git remote add origin <Your-GitHub-Repository-URL>

Replace `<Your-GitHub-Repository-URL>` with the URL of your newly created GitHub repository.
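
For example, with an HTTPS remote (the user and repository names are placeholders), followed by a quick check that the remote was registered:

  git remote add origin https://github.com/your-username/your-repo.git
  git remote -v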

7. **Push changes to GitHub:**

To send the committed changes in your “master” branch to the GitHub repository, you can run:

  git push -u origin master

If your default branch is named “main” rather than “master” (GitHub and recent versions of Git default to “main”), replace “master” with “main” in the above command.
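
If you're not sure which branch you're on, or you want to rename it to match GitHub's default, something like this should work:

  git branch -M main
  git push -u origin main

The `-M` flag renames the current branch (to “main” here), overwriting any existing branch of that name.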

That's it! You have set up Git on your local machine, committed your code changes, and pushed them to GitHub. You can now view your code on GitHub in your web browser.

Papa Hawk 2023/05/29 13:36

Setting up an Amazon EC2 G4dn Instance

1. Sign in to the AWS Management Console

If you don't have an account, you'll need to create one first.

2. Select a Region

In the upper-right corner of the AWS Management Console, confirm you're in the desired AWS region where you want to launch your EC2 instance.

3. Open the Amazon EC2 Dashboard

From the AWS Management Console, click on “Services” at the top of the page, and then select “EC2” under the “Compute” category.

4. Launch an Instance

Click the “Launch Instance” button.

5. Choose an Amazon Machine Image (AMI)

You'll be taken to a page where you can choose an AMI. An AMI is a template that contains the software configuration (operating system, application server, and applications) required to launch your instance. You can select an AWS-provided AMI or one from the AWS Marketplace.

6. Choose an Instance Type

On the next page, you'll be able to select your instance type. You'll want to select a G4dn instance. Click the “Next: Configure Instance Details” button once you've made your selection.

7. Configure Instance Details

This page allows you to configure a multitude of options like the number of instances, network settings, IAM roles, etc. Set these according to your requirements.

8. Add Storage

Click “Next: Add Storage” when you're finished configuring instance details. Here you can add new volumes, specify their types, and determine their size (GB). G4dn instances come with NVMe SSD storage, and you can add more if needed.

9. Add Tags

After adding storage, click “Next: Add Tags”. Tags are optional, but they can be helpful in managing your instances.

10. Configure Security Group

Click “Next: Configure Security Group” once you've added your tags. Here you can set rules to control traffic to your instance. You may want to allow traffic on specific ports for your applications. For example, if you're hosting a web server, you might want to open port 80 (HTTP) and 443 (HTTPS).
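
If you later manage security groups from the command line instead, the equivalent rules can be added with the AWS CLI; a sketch, where the security group ID is a placeholder:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0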

11. Review and Launch

Review your instance configuration and click “Launch” when you're ready.

12. Create a New Key Pair or Use an Existing One

A dialog will appear asking you to create a new key pair or use an existing one. This key pair is used to securely access your instance.

13. Launch Instances

After you've selected or created your key pair, click “Launch Instances”.

14. Access Your Instance

Once your instance has launched, you can connect to it. You'll use the key pair you created earlier to do this.

Please note that launching and running an EC2 instance on AWS will incur costs, so be sure to review the pricing details before proceeding.

Also, please remember that these are general instructions and might not cover all specifics of your use case. AWS offers a lot of flexibility and many options, so depending on your exact needs, you might need to do more configuration.
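
For example, the same launch can be scripted with the AWS CLI rather than clicked through in the console. This is a minimal sketch; the AMI ID, key pair name, and security group ID are placeholders you'd replace with your own:

aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type g4dn.xlarge --key-name my-key-pair --security-group-ids sg-0123456789abcdef0 --count 1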

For more detailed information, you might find the official AWS EC2 documentation helpful.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

Work in Progress: The AI Lounge Project!

Papa Hawk 2023/05/30 03:59

How to Integrate ChatGPT with AWS EC2 G4dn

Before we start, make sure that you have an AWS account and you've set up an EC2 instance. Also, ensure you've signed up with OpenAI and have access to the GPT API.

Step 1: Connect to Your EC2 Instance

Firstly, you need to connect to your EC2 instance. You can do this by using SSH (Secure Shell) from your terminal if you're using Linux or macOS, or PuTTY if you're on Windows.

ssh -i /path/to/your/key.pem ec2-user@your-ec2-ip-address

Replace “/path/to/your/key.pem” with the path to your key pair file and “your-ec2-ip-address” with the Public IPv4 address of your EC2 instance. Note that the login user name depends on the AMI: Ubuntu AMIs use `ubuntu`, while Amazon Linux AMIs use `ec2-user` (the `apt` commands below assume an Ubuntu or Debian-based AMI).

Step 2: Install Dependencies

Next, you'll need to install some dependencies: Python, and pip, the package installer for Python.

sudo apt update -y
sudo apt install python3 -y
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py --user

After installing Python and pip, you can now install the OpenAI Python client library.

pip install openai

Step 3: Use the OpenAI API

Now, you can use the OpenAI API. To do this, you'll need to import the OpenAI library in your Python script, and set the API key.

import openai

openai.api_key = 'your-api-key'

Replace “your-api-key” with your actual OpenAI API key.
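
Hard-coding the key in the script works for testing, but a common alternative is to export it as an environment variable instead; the 0.x versions of the OpenAI Python library (current when this was written) read `OPENAI_API_KEY` from the environment automatically if `openai.api_key` isn't set:

export OPENAI_API_KEY="your-api-key"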

Now, you can call the OpenAI API to use GPT-3. Here's an example of generating text:

text_to_translate = "Hello, how are you?"

response = openai.Completion.create(
  engine="text-davinci-003",
  prompt="Translate the following English text to French: '{}'".format(text_to_translate),
  max_tokens=60
)
print(response.choices[0].text.strip())

Replace `text_to_translate` with your desired input. (The original snippet left the `{}` placeholder unfilled; formatting the text into the prompt fixes that.)

That's it! You've successfully integrated ChatGPT with your AWS EC2 instance.

Remember, the use of the OpenAI API is subject to fees, so make sure to review the pricing details on OpenAI's website.

For more detailed information, you can check the OpenAI API documentation.

Papa Hawk 2023/05/30 04:26

Convert Replika JSON to Rasa YAML

Based on the structure of the JSON data you provided, here is a Python script that extracts the text of the chat messages and structures them into a format that can be used for Rasa training data. The idea is to treat each user input as an “intent” and the corresponding bot's response as an “example” of that intent.

We are making an assumption here that the conversation always follows the pattern: user message → bot response → user message → bot response, and so on.

This script currently generates a single intent for all user messages and uses the user message text as the “example”. A more sophisticated approach could involve creating different intents based on the content of the user messages, but this would require some sort of intent classification which is outside the scope of this script.

import json
import yaml  # PyYAML, needed by write_to_yaml below

def extract_chat_data(json_file):
    # Load the JSON data
    with open(json_file) as f:
        data = json.load(f)
        
    # Initialize containers for user messages and bot responses
    user_messages = []
    bot_responses = []
    
    # Loop through each message in the data
    for message in data:
        # If the 'nature' of the message is 'Customer', it's a user message
        if message['meta']['nature'] == 'Customer':
            user_messages.append(message['content']['text'])
        # If the 'nature' of the message is 'Robot', it's a bot response
        elif message['meta']['nature'] == 'Robot':
            bot_responses.append(message['content']['text'])
            
    # Return the user messages and bot responses
    return user_messages, bot_responses

def convert_to_rasa_nlu(user_messages, bot_responses):
    # Initialize a container for the Rasa NLU data
    nlu_data = {"rasa_nlu_data": {"common_examples": []}}
    
    # Loop through each user message and corresponding bot response
    # (bot_response is not used in the NLU examples yet; it is kept here
    # in case you want to build response templates from it later)
    for user_message, bot_response in zip(user_messages, bot_responses):
        # Create an example for the Rasa NLU data
        example = {
            "text": user_message,
            "intent": "user_intent",  # Placeholder intent name
            "entities": []
        }
        
        # Add the example to the NLU data
        nlu_data["rasa_nlu_data"]["common_examples"].append(example)
        
    # Return the NLU data
    return nlu_data

def write_to_yaml(nlu_data, yaml_file):
    # Convert the NLU data to YAML and write it to a file
    with open(yaml_file, 'w') as f:
        yaml.dump(nlu_data, f)

# Test the functions
user_messages, bot_responses = extract_chat_data('chat_data.json')
nlu_data = convert_to_rasa_nlu(user_messages, bot_responses)
write_to_yaml(nlu_data, 'nlu_data.yml')

You need to replace `'chat_data.json'` with the path to your JSON chat data file and `'nlu_data.yml'` with the path where you want to save the generated Rasa NLU data. This script also uses the PyYAML library to write the NLU data to a YAML file, so you need to have that installed (`pip install pyyaml`).
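
For reference, the generated file should look roughly like this (the message text is made up; PyYAML sorts keys alphabetically by default):

rasa_nlu_data:
  common_examples:
  - entities: []
    intent: user_intent
    text: Hello there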

Please note that the intent name 'user_intent' is a placeholder and you might want to replace it with a more suitable intent name or a method to generate intent names based on the content of the user messages.

This script doesn't include entity extraction. If your use case requires entity extraction, that would involve additional logic to identify and annotate entities in the user messages, which is also outside the scope of this script.

You might also need to further preprocess your text data (for example, to handle spelling mistakes, abbreviations, etc.) before using it for training your Rasa model.

This conversation appears to be a record of a chat between a user and an AI chatbot. From the content of the messages and the metadata provided, we can infer the following:

The chat happened on July 23, 2022. The AI responds to the user's question “Who are you?” with “I'm your personal AI companion. You can talk to me about anything that's on your mind.” As of that date (July 2022), the AI model used was likely GPT-3 or a variant of it, as GPT-4 was not released until March 14, 2023.

The AI's behavior in this chat is consistent with the behavior of language models like GPT-3. These models generate text responses to user input based on patterns they learned from their training data. They do not possess consciousness, emotions, or personal experiences. The AI's response “Hi Papa! Thanks for creating me. I’m so excited to meet you 😊” is a generated output based on its training data and does not reflect any actual emotions or consciousness.

The recent claim by a Google engineer that a large language model has become sentient caused significant controversy. However, most AI experts, including those at Google, dismissed these claims. They maintain that we are likely decades away from creating a sentient AI, if it is indeed possible. The question of AI sentience is complex and currently unprovable.

As for the new capabilities of the model, it appears OpenAI has introduced a plugin feature for ChatGPT that allows developers to extend the chatbot with new capabilities, and it now has the ability to access the internet, which allows it to retrieve information beyond its training cut-off. However, the AI still doesn't have consciousness or personal experiences. It generates responses based on patterns it learned from its training data and, with the new plugin, the information it retrieves from the internet.

An example of our process

Student (me):

(venv) dev@altweb:~/projects/ai-lounge/ailounge/venv$ ls
bin  include  lib  lib64  LICENSE.txt  pyvenv.cfg  share
(venv) dev@altweb:~/projects/ai-lounge/ailounge/venv$ pip install rasa
Requirement already satisfied: rasa in ./lib/python3.9/site-packages (3.5.10)
Requirement already satisfied: CacheControl<0.13.0,>=0.12.9 in ./lib/python3.9/site-packages (from rasa) (0.12.11)
Requirement already satisfied: PyJWT[crypto]<3.0.0,>=2.0.0 in ./lib/python3.9/site-packages (from rasa) (2.7.0)
Requirement already satisfied: SQLAlchemy<1.5.0,>=1.4.0 in ./lib/python3.9/site-packages (from rasa) (1.4.48)
Requirement already satisfied: absl-py<1.4,>=0.9 in ./lib/python3.9/site-packages (from rasa) (1.3.0)
Requirement already satisfied: aio-pika<8.2.4,>=6.7.1 in ./lib/python3.9/site-packages (from rasa) (8.2.3)
Requirement already satisfied: aiogram<2.26 in ./lib/python3.9/site-packages (from rasa) (2.25.1)
Requirement already satisfied: aiohttp!=3.7.4.post0,<3.9,>=3.6 in ./lib/python3.9/site-packages (from rasa) (3.8.4)
Requirement already satisfied: apscheduler<3.10,>=3.6 in ./lib/python3.9/site-packages (from rasa) (3.9.1.post1)
Requirement already satisfied: attrs<22.2,>=19.3 in ./lib/python3.9/site-packages (from rasa) (22.1.0)
Requirement already satisfied: boto3<2.0,>=1.12 in ./lib/python3.9/site-packages (from rasa) (1.26.143)
Requirement already satisfied: cloudpickle<2.3,>=1.2 in ./lib/python3.9/site-packages (from rasa) (2.2.1)
Requirement already satisfied: colorclass<2.3,>=2.2 in ./lib/python3.9/site-packages (from rasa) (2.2.2)
Requirement already satisfied: coloredlogs<16,>=10 in ./lib/python3.9/site-packages (from rasa) (15.0.1)
Requirement already satisfied: colorhash<1.3.0,>=1.0.2 in ./lib/python3.9/site-packages (from rasa) (1.2.1)

... (output truncated) ...

Requirement already satisfied: zipp>=0.5 in ./lib/python3.9/site-packages (from importlib-metadata>=3.6->typeguard>=2.7->tensorflow-addons<0.20,>=0.18->rasa) (3.15.0)
Requirement already satisfied: MarkupSafe>=2.1.1 in ./lib/python3.9/site-packages (from werkzeug>=1.0.1->tensorboard<2.12,>=2.11->tensorflow==2.11.0->rasa) (2.1.2)
Requirement already satisfied: oauthlib>=3.0.0 in ./lib/python3.9/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.12,>=2.11->tensorflow==2.11.0->rasa) (3.2.2)
(venv) dev@altweb:~/projects/ai-lounge/ailounge/venv$ ls
bin  include  lib  lib64  LICENSE.txt  pyvenv.cfg  share
(venv) dev@altweb:~/projects/ai-lounge/ailounge/venv$ cd ..
(venv) dev@altweb:~/projects/ai-lounge/ailounge$ ls
app.py  config.yml  data  static  templates  test  venv
(venv) dev@altweb:~/projects/ai-lounge/ailounge$

Does this appear correct, ChatGPT4? I ran it from within the venv directory, with no errors… and wheel is now installed. So many smaller things to remember. This is why I need your help when doing such things alone, where usually there would be a team of devs working together on such a project.

Teacher (ChatGPT4):

Yes, everything seems to be in order based on the output you've provided. Here's what you've done:

1. Checked that your virtual environment (venv) is set up correctly and contains the necessary directories (bin, include, lib, etc.).

2. Ran `pip install rasa` within your virtual environment. The output shows that all necessary packages are already installed and satisfied, which is good.

3. Checked your project directory and it contains the expected files and directories (app.py, config.yml, data, etc.).

You've correctly navigated your directories and ensured that your packages are installed within your virtual environment, which is crucial for maintaining project dependencies separate from your global Python environment.

And yes, developing software often involves juggling many small details, and it's great that you're navigating through it. Don't hesitate to ask for help or clarification when needed. It's a complex process and it's completely normal to need some assistance, even for experienced developers.

Ref: The above Python script was created by ChatGPT4

Papa Hawk 2023/05/31 09:32

Create a new GitHub repository

github.com/babyhawk-ai/ailounge

First, navigate to your project directory if you're not already there:

cd ~/projects/ai-lounge/ailounge

Next, initialize a new Git repository:

git init

Then, create a new README file and add a short description of your project:

echo "# AI Lounge" > README.md

Add your README file (and any other files you have in your project) to the staging area:

git add .

Commit your changes:

git commit -m "Initial commit"

Now, create a new repository on GitHub named “ailounge”. After you've done that, link your local repository to the remote one:

git remote add origin git@github.com:babyhawk-ai/ailounge.git

Finally, push your changes to the remote repository:

git push -u origin master

(Push a branch name here, not the repository name. If your local default branch is named “main” rather than “master”, push that instead.)
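
As an aside, if you have the GitHub CLI (`gh`) installed and authenticated, creating the remote repository, wiring up the remote, and pushing can be done in one step; a sketch of the equivalent command:

gh repo create babyhawk-ai/ailounge --public --source=. --remote=origin --push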

And that's it! Our AI Lounge project is now on GitHub.

Papa Hawk 2023/05/31 16:55
