Autogpt PDF Guide 1
If you want to rent an affordable hosting provider to test this out, we recommend Time4VPS.
If you have any issues installing this locally or if you want to contribute to the project, please
visit the official GitHub repository. You can open a new ticket if you encounter any installation
or other problems.
Requirements
● VSCode + devcontainer: a devcontainer is configured in the .devcontainer folder and can
be used directly
● Docker
● Python 3.10 or later (see the official installation instructions for Windows)
Optional
Memory backend
● Pinecone
● Milvus
● Redis
● Weaviate
If you want the AI to speak, you will need to get the ElevenLabs Key.
To use the OpenAI API key for Auto-GPT, you need to have billing set up (paid account).
1. Make sure you have all the requirements listed above; if not, install them
2. Clone the repository: For this step, you need Git installed. Alternatively, you can
download the latest stable release (Source code (zip), bottom of the page).
git clone https://github.com/Significant-Gravitas/Auto-GPT.git
3. Navigate to the directory where the repository was downloaded
cd Auto-GPT
4. Make sure to use the stable branch rather than master, which may often be broken:
git checkout stable
5. Install the required dependencies:
pip install -r requirements.txt
6. Configure Auto-GPT
○ Locate the file named .env.template in the main /Auto-GPT folder.
○ Create a copy of this file called .env by removing the template extension.
The easiest way is to do this in a command prompt/terminal window: cp
.env.template .env
○ Open the .env file in a text editor (Files starting with a dot might be hidden by
your Operating System).
○ Find the line that says OPENAI_API_KEY=.
○ After the "=", enter your unique OpenAI API Key (without any quotes or
spaces).
○ Enter any other API keys or Tokens for services you would like to utilize.
○ Save and close the .env file.
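The copy-and-edit steps above can also be scripted. The sketch below demonstrates them in a throwaway directory with a stand-in template file; in the real repository you would run the cp and sed commands inside the Auto-GPT folder, where .env.template already exists (the sk-... value is a placeholder, not a real key):

```shell
# Demonstration in a temporary directory with a minimal stand-in template.
cd "$(mktemp -d)"
printf 'OPENAI_API_KEY=\n' > .env.template

# Copy the template to .env, then put your key after the "=" (no quotes, no spaces).
cp .env.template .env
sed -i 's/^OPENAI_API_KEY=.*/OPENAI_API_KEY=sk-your-key-here/' .env

# Confirm the key was written.
grep OPENAI_API_KEY .env
```

Note that the sed in-place flag differs on macOS (`sed -i ''`); editing .env in a text editor works just as well.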
7. You have properly configured the API Keys for your project. You can now update
some other parts of the .env file:
○ Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view
your xi-api-key using the "Profile" tab on the website.
○ If you want to use GPT on an Azure instance, set USE_AZURE to True and then
follow these steps:
■ Rename azure.yaml.template to azure.yaml and provide the
relevant azure_api_base, azure_api_version and all the
deployment IDs for the relevant models in the azure_model_map
section:
■ fast_llm_model_deployment_id - your gpt-3.5-turbo or gpt-4
deployment ID
■ smart_llm_model_deployment_id - your gpt-4 deployment ID
■ embedding_model_deployment_id - your
text-embedding-ada-002 v2 deployment ID
■ Please specify all of these values as double-quoted strings:
# Replace string in angled brackets (<>) with your own ID
azure_model_map:
  fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
■ Details can be found in the Microsoft Azure Endpoints section at https://pypi.org/project/openai/ and, for the embedding model, at https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line
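Putting these settings together, a complete azure.yaml might look like the sketch below. The azure_api_type line, the api-version value, and all the <...> placeholders are assumptions; verify the exact keys against azure.yaml.template in your copy of the repository.

```yaml
# Sketch of a filled-in azure.yaml; replace the <...> placeholders with your own values.
azure_api_type: azure
azure_api_base: https://<your-resource-name>.openai.azure.com
azure_api_version: 2023-03-15-preview
azure_model_map:
  fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
  smart_llm_model_deployment_id: "<my-smart-llm-deployment-id>"
  embedding_model_deployment_id: "<my-embedding-deployment-id>"
```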
Usage
# On Linux or Mac:
./run.sh start
# On Windows:
.\run.bat
Running with --help after .\run.bat lists all the possible command line arguments you can
pass.
After each action, choose from options to authorize command(s), exit the program, or
provide feedback to the AI.
Logs
You can pass extra arguments; for instance, to run with --gpt3only and
--continuous mode:
./run.sh --gpt3only --continuous
NOTE: There are shorthands for some of these flags, for example -m for
--use-memory. Use python -m autogpt --help for more information
Speech Mode
Use this to enable TTS (Text-to-Speech) for Auto-GPT.
Voice IDs and names from ElevenLabs; you can use either the name or the ID:
● Rachel : 21m00Tcm4TlvDq8ikWAM
● Domi : AZnzlk1XvdvUeBnXmlld
● Bella : EXAVITQu4vr4xnSDxMaL
● Antoni : ErXwobaYiN019PkySvjV
● Elli : MF3mGyEYCl7XYWbV9V6O
● Josh : TxGEqnHWrfWFTfGW9XjX
● Arnold : VR6AewLTigWG4xSOukaG
● Adam : pNInz6obpgDQGcFmaJgB
● Sam : yoZ06aMxZJJ28mfd3POQ
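To apply one of these voices, the API key and a voice ID go into your .env (or environment). The variable names below follow the ELEVENLABS_* convention but are assumptions; verify them against .env.template:

```shell
export ELEVENLABS_API_KEY="YOUR_ELEVENLABS_API_KEY"
# Voice ID for Rachel from the list above; swap in any other ID (or name).
export ELEVENLABS_VOICE_1_ID="21m00Tcm4TlvDq8ikWAM"
```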
Google API Keys Configuration
Remember that your free daily custom search quota allows only up to 100 searches.
To increase this limit, attach a billing account to the project to benefit from
up to 10K daily searches.
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
To switch memory backends, change the MEMORY_BACKEND env variable to the value that
you want:
Redis Setup
CAUTION
This setup is not intended to be publicly accessible and lacks security measures.
Avoid exposing Redis to the internet without a password, or preferably at all.
You can specify the memory index for Redis using the following:
MEMORY_INDEX=<WHATEVER>
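The Redis connection settings also live in .env (or the environment). The variable names below are assumptions modeled on the other backends; confirm them against .env.template. A local server can be started with Docker's official redis/redis-stack-server image.

```shell
# Hypothetical connection settings for a local Redis instance.
export MEMORY_BACKEND="redis"
export REDIS_HOST="localhost"
export REDIS_PORT="6379"          # Redis's default port
export REDIS_PASSWORD="<YOUR_REDIS_PASSWORD>"
```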
Pinecone API Key Setup
Pinecone enables the storage of vast amounts of vector-based memory, allowing for
only relevant memories to be loaded for the agent at any given time.
● PINECONE_API_KEY
● PINECONE_ENV (example: "us-east4-gcp")
● MEMORY_BACKEND=pinecone
Alternatively, you can set them from the command line (advanced):
export PINECONE_API_KEY="<YOUR_PINECONE_API_KEY>"
export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
Milvus Setup
Milvus is an open-source, highly scalable vector database that can store huge amounts of
vector-based memory and provides fast relevant search.
● Set up a Milvus database; keep your pymilvus version and Milvus version the
same to avoid compatibility issues.
○ set up the open-source version: Install Milvus
○ or use Zilliz Cloud
● set MILVUS_ADDR in .env to your Milvus address host:port.
● set MEMORY_BACKEND in .env to milvus to enable Milvus as backend.
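The two settings above can also be expressed as environment exports; 19530 is Milvus's default port, and the host value here is a placeholder:

```shell
export MILVUS_ADDR="localhost:19530"   # host:port of your Milvus instance
export MEMORY_BACKEND="milvus"
```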
Weaviate Setup
Weaviate is an open-source vector database. It allows storing data objects and vector
embeddings from ML models and scales seamlessly to billions of data objects. An
instance of Weaviate can be created locally (using Docker), on Kubernetes, or using
Weaviate Cloud Services. Although still experimental, Embedded Weaviate is
supported, which allows the Auto-GPT process itself to start a Weaviate instance. To
enable it, set USE_WEAVIATE_EMBEDDED to True and make sure you pip install
"weaviate-client>=3.15.4".
MEMORY_BACKEND=weaviate
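For a non-embedded Weaviate instance you also need to tell Auto-GPT where it runs. The variable names below are assumptions; check .env.template for the names your version actually uses:

```shell
export MEMORY_BACKEND="weaviate"
export WEAVIATE_HOST="127.0.0.1"       # host of your Weaviate instance (assumption)
export WEAVIATE_PORT="8080"            # Weaviate's default REST port
export USE_WEAVIATE_EMBEDDED="False"
```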
Memory pre-seeding
Memory pre-seeding allows you to ingest files into memory and pre-seed it before
running Auto-GPT.
# python data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap
OVERLAP] [--max_length MAX_LENGTH]
Ingest a file or a directory with multiple files into memory. Make sure to set
your .env before running this script.
options:
-h, --help show this help message and exit
--file FILE The file to ingest.
--dir DIR The directory containing the files to ingest.
--init Init the memory and wipe its content (default:
False)
--overlap OVERLAP The overlap size between chunks when ingesting files
(default: 200)
--max_length MAX_LENGTH The max_length of each chunk when ingesting files
(default: 4000)
For example, the command
# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
initializes the memory and ingests all files within the
Auto-Gpt/autogpt/auto_gpt_workspace/DataFolder directory into memory, with an
overlap between chunks of 100 and a maximum chunk length of 2000.
Note that you can also use the --file argument to ingest a single file into memory and
that data_ingestion.py will only ingest files within the /auto_gpt_workspace directory.
You can adjust the max_length and overlap parameters to fine-tune the way the
documents are presented to the AI when it "recalls" that memory:
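To see how these two parameters interact, here is an illustrative sketch of overlap-based chunking. It is not Auto-GPT's actual implementation, just the general technique: each chunk holds at most max_length characters, and consecutive chunks share overlap characters so context at chunk boundaries is preserved.

```python
# Illustrative sketch of overlap-based chunking (NOT Auto-GPT's actual code).
def split_text(text: str, max_length: int = 4000, overlap: int = 200) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break  # last chunk reached the end of the text
        start += max_length - overlap  # step forward, keeping `overlap` chars shared
    return chunks

# A 250-character text with max_length=100 and overlap=20 yields three chunks;
# the last 20 characters of each chunk equal the first 20 of the next.
demo = "".join(str(i % 10) for i in range(250))
parts = split_text(demo, max_length=100, overlap=20)
print(len(parts))  # → 3
```

A larger overlap gives the AI more shared context between recalled chunks at the cost of more stored (and embedded) text.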
For another memory backend, we currently forcefully wipe the memory when starting
Auto-GPT. To ingest data with that memory backend, you can call the
data_ingestion.py script anytime during an Auto-GPT run.
Memories will be available to the AI immediately as they are ingested, even if ingested
while Auto-GPT is running.
Continuous Mode
Run the AI without user authorization, 100% automated. Continuous mode is NOT
recommended. It is potentially dangerous and may cause your AI to run forever or
carry out actions you would not usually authorize. Use at your own risk.
Image Generation
By default, Auto-GPT uses DALL·E for image generation. To use Stable Diffusion, a
Hugging Face API Token is required.
IMAGE_PROVIDER=sd
HUGGINGFACE_API_TOKEN="YOUR_HUGGINGFACE_API_TOKEN"
Selenium
To run the browser on a machine without a display, start a virtual framebuffer with
Xvfb and point your client at it:
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
Limitations
This experiment aims to showcase the potential of GPT-4 but comes with some
limitations.
Disclaimer
This project, Auto-GPT, is an experimental application and is provided
"as-is" without any warranty, express or implied. By using this software, you agree to
assume all risks associated with its use, including but not limited to data loss, system
failure, or any other issues that may arise.
The developers and contributors of this project do not accept any responsibility or
liability for any losses, damages, or other consequences that may occur as a result
of using this software. You are solely responsible for any decisions and actions taken
based on the information provided by Auto-GPT.
Please note that the use of the GPT-4 language model can be expensive due to its
token usage. By utilizing this project, you acknowledge that you are responsible for
monitoring and managing your own token usage and the associated costs. It is
highly recommended to check your OpenAI API usage regularly and set up any
necessary limits or alerts to prevent unexpected charges.
By using Auto-GPT, you agree to indemnify, defend, and hold harmless the developers,
contributors, and any affiliated parties from and against any and all claims, damages,
losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising
from your use of this software or your violation of these terms.