Using memory of type: LocalCache
Token limit: 4000
Memory Stats: (0, (0, 1536))
Token limit: 4000
Send Token Count: 907
Tokens remaining for response: 3093
------------ CONTEXT SENT TO AI ---------------
System: The current time and date is Fri Apr 14 16:53:50 2023
System: This reminds you of these events from your past:
User: Determine which next command to use, and respond using the format specified above:
----------- END OF CONTEXT ----------------
Error: API Rate Limit Reached. Waiting 20 seconds...
Error: API Rate Limit Reached. Waiting 20 seconds...
Error: API Rate Limit Reached. Waiting 20 seconds...
Error: API Rate Limit Reached. Waiting 20 seconds...
Error: API Rate Limit Reached. Waiting 20 seconds...
Traceback (most recent call last):
  File "scripts/main.py", line 461, in <module>
    main()
  File "scripts/main.py", line 365, in main
    assistant_reply = chat.chat_with_ai(
  File "E:\PyCharmProject\Auto-GPT\scripts\chat.py", line 126, in chat_with_ai
    assistant_reply = create_chat_completion(
  File "E:\PyCharmProject\Auto-GPT\scripts\llm_utils.py", line 50, in create_chat_completion
    raise RuntimeError("Failed to get response after 5 retries")
RuntimeError: Failed to get response after 5 retries
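For context, here is a minimal sketch of the retry behaviour the log implies: the call hits the OpenAI rate limit, waits 20 seconds, retries 5 times, and then raises the RuntimeError seen in the traceback. The function name create_chat_completion comes from the traceback, but the body below is only an illustration under the assumption of the OpenAI Python client available in April 2023 (openai.ChatCompletion.create, openai.error.RateLimitError); it is not the actual llm_utils.py source.

```python
import time

import openai
from openai.error import RateLimitError

def create_chat_completion(messages, model="gpt-3.5-turbo", num_retries=5):
    """Illustrative sketch of the retry loop suggested by the log above.

    The retry count (5) and the 20-second wait are taken from the log;
    the real Auto-GPT implementation may differ in detail.
    """
    for attempt in range(num_retries):
        try:
            response = openai.ChatCompletion.create(
                model=model,
                messages=messages,
            )
            return response.choices[0].message["content"]
        except RateLimitError:
            print("Error: API Rate Limit Reached. Waiting 20 seconds...")
            time.sleep(20)
    # All retries exhausted: this is the error surfaced in the traceback.
    raise RuntimeError("Failed to get response after 5 retries")
```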