LLM as Code Assistant
Create LLM based product
Using 3rd party LLM (ChatGPT, Gemini, Claude, etc.)
Using local LLM
Examples:
- Chat application (multimodal: text, audio and image generation)
- YouTube video summarizer
- Retrieval-Augmented Generation
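Retrieval-Augmented Generation can be sketched end-to-end in a few lines: retrieve the most relevant document for a question, then stuff it into the prompt before calling the LLM. A toy version with word-overlap scoring (the documents, scorer, and prompt template below are illustrative assumptions, not from a real system):

```python
docs = [
    "Ollama serves local LLMs over a REST API on port 11434.",
    "yt-dlp downloads YouTube videos and audio tracks.",
    "whisper.cpp transcribes audio to text on the CPU.",
]

def retrieve(query, documents):
    """Return the document sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, documents):
    """Augment the user's question with the retrieved context."""
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("transcribe audio to text", docs))
```

A real system would replace the word-overlap scorer with embedding similarity over a vector store, but the shape of the pipeline (retrieve, then augment the prompt) stays the same.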
Using 3rd party app (ChatGPT/Gemini etc.)
Using code editor/IDE plugins (Copilot/Cody/Codium etc.)
def add_item(item, shopping_list=[]):
    shopping_list.append(item)
    return shopping_list

list1 = add_item("apples")   # ["apples"]
list2 = add_item("bananas")  # ["apples", "bananas"]: the default list is shared between calls!
print("List 1:", list1)
print("List 2:", list2)
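Spotting the bug above is a typical code-assistant task: the default list is created once at function definition time and shared across calls. A sketch of the usual fix, using None as the default:

```python
def add_item(item, shopping_list=None):
    # Create a fresh list per call instead of sharing one mutable default.
    if shopping_list is None:
        shopping_list = []
    shopping_list.append(item)
    return shopping_list

list1 = add_item("apples")   # ["apples"]
list2 = add_item("bananas")  # ["bananas"] -- each call gets its own list
```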
Prompt example:
Write Python code that tracks the CPU usage of a particular process (by PID) and graphs it using the matplotlib library. Track it for 30 seconds, then save the chart to a chart.png file.
Prompt examples:
- Describe to me the top 5 most frequently used <library> functions and give me some examples.
- What are the core fundamental concepts of <framework>? Explain to me as someone who has never used it before.
- Using 3rd party / closed weight LLM
- Using local LLM
Closed weight LLM examples:
- model selection
- messages parameter (system, user, assistant)
- temperature
- max_tokens
- stream
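The parameters above map directly onto an OpenAI-style chat completion request. A minimal sketch of the request body; the model name and message contents are placeholders, not from the source:

```python
import json

# Hypothetical request body for an OpenAI-style chat completions API.
payload = {
    "model": "gpt-4o",             # model selection (placeholder name)
    "messages": [                  # roles: system, user, assistant
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this video transcript."},
    ],
    "temperature": 0.7,            # sampling randomness (0 = near-deterministic)
    "max_tokens": 256,             # cap on generated tokens
    "stream": False,               # True to receive tokens incrementally
}
print(json.dumps(payload, indent=2))
```

Sending this body to the provider's chat endpoint (with an API key) returns an assistant message; with stream enabled, the same request yields incremental chunks instead.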
- An LLM is just files.
- An LLM repository is just a git folder.
See the available open-weight LLMs here
- Base model (foundation model), e.g: Llama-3, Phi-3, OpenELM, Mistral
- Parameter size, e.g: 7B, 8B, 70B
- Context size, e.g: 262k, 1048k
- Fine-tuned data, e.g: instruct, chat, chinese-chat
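These naming components typically show up together in a model tag such as llama3:8b-instruct. A toy parser to pull them apart; the tag format assumed here follows the common name:variant convention:

```python
def parse_model_tag(tag):
    """Split a 'name:variant' model tag into base name and variant parts (toy example)."""
    name, _, variant = tag.partition(":")
    parts = variant.split("-") if variant else []
    return {"base": name, "variant_parts": parts}

print(parse_model_tag("llama3:8b-instruct"))
# {'base': 'llama3', 'variant_parts': ['8b', 'instruct']}
```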
See leaderboard here
Deploying LLM using Ollama
- Install ollama
- Run server:
ollama serve
- Pull model:
ollama pull llama3
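Once the server is running and the model is pulled, Ollama exposes a local REST API (port 11434 by default). A hedged Python sketch of a generate call; the actual request is commented out because it requires a running server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt, model="llama3"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3"):
    """POST a prompt to a locally running `ollama serve` instance."""
    body = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("Why is the sky blue?")  # requires `ollama serve` + `ollama pull llama3`
```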
- OpenAI API (text/image generation)
- Ollama (text generation)
- Prosa TTS (audio generation)
Run the app:
cd app
go run ./cmd/*
Open this address
We are using:
- OpenAI API / Ollama (Summary generation)
- whisper.cpp (Audio transcription, audio -> text)
- yt-dlp (YouTube video downloader)
Then open this address
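The summarizer components form a pipeline: download the audio with yt-dlp, transcribe it with whisper.cpp, then send the transcript to the LLM for a summary. A sketch of how the first two stages' commands could be composed; the output file names and model path are assumptions, not taken from the app's source:

```python
# Sketch of the download -> transcribe stages of the summarizer pipeline.
def download_cmd(url, out="audio.wav"):
    # yt-dlp: extract audio only, since we just need the transcript
    return ["yt-dlp", "-x", "--audio-format", "wav", "-o", out, url]

def transcribe_cmd(audio, model="models/ggml-base.en.bin"):
    # whisper.cpp main binary: audio -> plain-text transcript (-otxt)
    return ["./main", "-m", model, "-f", audio, "-otxt"]

print(download_cmd("https://youtube.com/watch?v=example"))
print(transcribe_cmd("audio.wav"))
```

In the app these commands would be run via subprocess, and the resulting transcript fed to the OpenAI API or Ollama with a "summarize this" prompt.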