BPAC
 

Large Language Models (LLMs)

    grok

Uses may seem limitless, but choose/make general areas and subcategories, research online, ask Grok.
LIST USES, PUT INTO CATEGORIES
1. LIST many general 'uses' out of the box; note the limits on free use.
2. Don't be afraid to take the time to organize and familiarize yourself, not only with the tools, but with the new LANGUAGE between human and AI, i.e. 'programming or getting things done' with AI. Thus the need to look at API usage within an app, a website, or elsewhere.

LIST IDEAS FOR BUSINESS:
Brainstorm business applications, i.e. things I want to get done, but more automated where possible. For example, a kids' book or a coloring book.

LEARN:
PROMPT LANGUAGE - the specific extra vocabulary related to a subject area, like video or 3D game objects and play. You had to know Blender a little, or work with the language, to describe the 3D object verbally. AI adds routines, code, selectors, speed - unbelievable. But you have to be able to verbalize the commands.
LLM APIs

AI Software
Chatbots - API - Conversational AI for customer service and engagement
Voice Assistants - Context-aware, multi-tasking digital companions
Avatars - Lifelike virtual characters for gaming, VR, and customer interaction
Coding - AI-assisted programming, debugging, and software development automation (Windsurf - coding)
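To get a feel for the API side of chatbots, here is a minimal sketch of the JSON body an OpenAI-style chat completions endpoint expects. The model name and endpoint URL below are illustrative assumptions, and no network call is made.

```python
import json

# Hypothetical endpoint, shown only for orientation; this sketch never sends a request.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_message, system_prompt="You are a helpful customer-service assistant."):
    """Assemble the JSON body for a single-turn chatbot request (OpenAI-style shape)."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name, an assumption for this sketch
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("Where is my order?")
print(json.dumps(payload, indent=2))
```

The same request shape works across many hosted LLM APIs, which is why learning it once pays off for several providers.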
* Elon Musk - xAI - Grok (integrated with X, formerly Twitter); Tesla
Optimus - Humanoid Robot

* Anthropic - Claude 3
- Sonnet (a Claude 3 model), available on Amazon AWS cloud (Bedrock)
(Note: Stable Diffusion image generation is from Stability AI, not Anthropic)

* OpenAI - ChatGPT

* Meta (Facebook) - Llama

DeepSeek, R1
DeepSeek GitHub
DeepSeek Hugging Face

Mistral
MosaicML Foundations - MPT-7B

Hugging Face - BLOOM
---------------------

https://chat.reka.ai/auth/login - Reka
Moonshot AI - Kimi
Stability.ai - Stable Diffusion, image generation
--------------------------- CHINA
Tencent - Hunyuan
ByteDance - Doubao (LLM)
Tarsier2 - ByteDance's large vision-language model (LVLM)
TikTok (known as Douyin, 抖音, in China)

USE CASES

WRITE TEXT
1. Writing BOOKS
2. Song Writing

Intellectual Property (IP)
COPY (copyright)
Trademark
Slander
Libel
False Accusation

1. Find classic books that have clearly been in the public domain for at least 1 year, as well as their illustrations. In the US, roughly 95 years after publication for older published works;
in the UK, 70 years following the author's death.
The best-known works are too common, but we can do them anyway, especially in other languages - NICHE LANGUAGES, the local languages of India.
*** DOMAIN NAME SYSTEM *** - search phrases, misspellings, word-order scrambles, text from the passages, main characters, location names - OTHER LANGUAGES, the CLASSICS IN THOSE LANGUAGES

The least-known works are never searched for, because nobody knows the name of either the author or the title. We do NOT do these.
----------
AMAZON - distribution and fulfillment (printing, binding, shipping; ABE and other markets)
ETSY - digital equivalent - T-shirts, caps, coffee mugs, greeting cards

AI CODING, PROGRAMMING

https://app.anakin.ai/discover

Devin

Open Devin

https://chat.openai.com/

ANGLE

KIDS BOOKS
BIG MOVIE TITLES
PERSUASION, SALES, CLOSING
Make America Great
Save our Schools
POLITICAL, USA - Republican - RED
RELIGIOUS - Anti-Satanist Christian - Forgive those who hurt you the most; Muslim; Buddhist; Hindu
Don't Eat The Children
Don't Talk while you have food in your mouth.
FAT Movement
I'm FAT, and It's NOT OK. Don't shame me, I need your HELP.
CONSPIRACY THEORIES
ATTRIBUTES: BIG MOVIE, KIDS BOOKS, Goodness, Family Values, Clean Food, Better Schools, Organic Farming

Python

User Interface:

Integrated Development Environments (IDEs): These tools offer code editing, debugging, and project management. You still need Python installed, but the IDE handles running it seamlessly.

PyCharm
Thonny

LOCAL APPLICATIONS

Python installed
Ollama pulls a model
LangChain
LLM libraries - LangChain, LlamaIndex, OpenAI API, Hugging Face
Open-source LLMs - LLaMA, Mistral, Falcon, Gemma
Frameworks - Ollama, LangChain, Auto-GPT, GPT-Agents are written in Python
LangChain - framework for LLM-powered apps. LangChain is an open-source framework that simplifies the development of applications using large language models (LLMs). It provides a suite of tools to help developers combine language models with other resources, such as databases, APIs, and other data sources, to create powerful, flexible, and context-aware systems.
LlamaIndex - data management for AI agents
Auto-GPT - fully autonomous AI agent
Ollama - runs local LLMs easily
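The core pattern these frameworks package up is simple: a prompt template piped into a model call. The toy sketch below shows that idea in plain Python; the `stub_llm` function is a stand-in for a real model (e.g., one served by Ollama), and none of this is the actual LangChain API.

```python
# Toy version of the "prompt template -> model" chain pattern.
# Everything here is illustrative; stub_llm stands in for a real LLM call.

def prompt_template(template):
    """Return a function that fills {placeholders} in the template."""
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def stub_llm(prompt):
    """Stand-in for a real LLM; just echoes the prompt it received."""
    return f"[model answer to: {prompt}]"

def chain(template, llm):
    """Compose template -> llm: the core idea of a simple LLM chain."""
    fill = prompt_template(template)
    def run(**kwargs):
        return llm(fill(**kwargs))
    return run

summarize = chain("Summarize in one line: {text}", stub_llm)
print(summarize(text="LangChain links prompts, models, and data sources."))
```

Swapping `stub_llm` for a function that calls a local Ollama server or a hosted API is all it takes to make a chain like this real.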

ComfyUI - an open-source, user-friendly graphical interface for working with machine learning models, particularly generative AI (such as image generation and processing). It is designed to make it easier for non-technical users to interact with complex AI models without needing to write code.

n8n - a tool for automating workflows by connecting different services, APIs, and tools. It enables you to create complex, automated workflows without writing a lot of code, often by using simple drag-and-drop interfaces. It is capable of handling data inputs, making API calls, processing results, and triggering actions across various applications.
Replit

COMPUTER HARDWARE

Intel - i7 (i9)
RAM - 32 GB

Graphics Card
NVIDIA GPU - RTX 3090, RTX 4080 Super, RTX 4090

GPU (Recommended): A GPU with at least 24 GB of VRAM (like an NVIDIA RTX 3090 or 4090) can offload the model for faster inference. If you use a GPU, you can reduce the RAM requirement to 32 GB, as the model weights can reside in VRAM. For a 70B model with 4-bit quantization (a common optimization), the memory requirement is 64 GB of RAM or VRAM.

CPU: a multi-core CPU (AMD Ryzen 7 / Ryzen 9 5900X, or Intel Core i7/i9 with 6+ cores). Higher core counts and faster memory bandwidth (DDR5) help. CPU-Only: a PC with 64 GB RAM, a decent multi-core CPU (e.g., ), and an SSD. This could run a 4-bit quantized 70B model at 1-2 tokens/second, relying on system RAM. Without a GPU, inference leans heavily on RAM bandwidth, so a CPU with multiple memory channels (e.g., dual-channel DDR4/DDR5) helps.

GPU-Assisted: An NVIDIA RTX 3090 (24 GB VRAM), 32 GB system RAM, and a mid-tier CPU. This offloads the model to the GPU, potentially reaching 5-10 tokens/second depending on optimization. For a smoother experience (10+ tokens/second), you’d need a beefier setup, like 128 GB RAM or a GPU with 48 GB VRAM (e.g., NVIDIA A6000).
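The tokens-per-second figures above fall out of a simple back-of-the-envelope rule: each generated token requires streaming the full set of model weights from memory, so throughput ≈ memory bandwidth ÷ model size. The bandwidth numbers below are rough assumptions for illustration.

```python
# Rough throughput estimate for a 4-bit quantized 70B model:
# tokens/sec ~= memory bandwidth (GB/s) / model size (GB).
# Bandwidth figures are approximate, assumed values for illustration.

model_size_gb = 70e9 * 0.5 / 1e9  # 70B params at 4 bits (0.5 bytes each) ~= 35 GB

for label, bandwidth_gb_s in [("dual-channel DDR4", 50), ("dual-channel DDR5", 80)]:
    tokens_per_sec = bandwidth_gb_s / model_size_gb
    print(f"{label}: ~{tokens_per_sec:.1f} tokens/sec")
```

This lines up with the 1-2 tokens/second figure for CPU-only inference, and shows why GPU VRAM (with an order of magnitude more bandwidth) is such a large win.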

Quantization (4-bit or 8-bit precision), which reduces the memory footprint. Software like llama.cpp or Ollama can optimize the model for lower-end hardware.
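Quantization shrinks the weight footprint roughly in proportion to bits per parameter. A quick calculation for a 70B-parameter model (weights only, excluding activation and KV-cache overhead) shows why 4-bit is the common choice for consumer hardware:

```python
# Approximate weight-only memory footprint of a 70B-parameter model
# at different quantization precisions. Overheads (KV cache, activations)
# are deliberately excluded, so real requirements run somewhat higher.

PARAMS = 70e9

def weights_gb(bits_per_param):
    """bits -> bytes -> GB for the full set of model weights."""
    return PARAMS * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weights_gb(bits):.0f} GB of weights")
```

At 16-bit precision the weights alone (~140 GB) exceed any consumer setup; at 4-bit (~35 GB) they fit in 64 GB of RAM or in a 48 GB GPU, which matches the hardware recommendations above.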

Storage: 500 GB of free space, preferably on an SSD, to store the model files and ensure decent load times.
High-speed internet connection