
Llama 3 Is Impressive and Small

I can see why people hate it

Often called a plagiarism machine, LLMs like ChatGPT became popular because of their ability to generate text that resembles what a person would write, yet that text is clearly built by referencing other people's work. OpenAI has even admitted to scraping the web for content. I'm not sure there is a way out of this, but I'm hopeful.

At least with a local LLM, my data stays on my own machine.

The Three Model Sizes

The initial release of Llama 3 includes two primary variants: a smaller model with 8 billion parameters and a larger one with 70 billion parameters. Both are designed to support a wide array of applications and show state-of-the-art performance across various industry benchmarks. As of the latest update, an even bigger model with over 400 billion parameters is still in training. I will only be running the 8B model, because I'm not a rich weirdo, just a regular weirdo.
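For anyone curious, here is roughly what running it looks like. This is a minimal sketch using the gpt4all Python bindings; the model filename is the quantized build from the GPT4All download catalog at the time of writing, so treat it as an assumption and swap in whatever you actually downloaded.

```python
from gpt4all import GPT4All

# Load a quantized Llama 3 8B Instruct model. The Q4_0 GGUF build is a
# 4-bit quantization that fits comfortably in ordinary desktop RAM.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# A chat session keeps the instruct-style prompt template applied.
with model.chat_session():
    reply = model.generate("Why run an LLM locally?", max_tokens=200)
    print(reply)
```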

Models Like Llama 3 for Tasks

I rarely use Llama 3 to simply write content from prompts; instead, I use the model to get tasks done in my work.

I already use Whisper for generating captions and transcripts, so it's easy to pull those into this workflow. I can drop a video's transcript into the LocalDocs feature of GPT4All and use it to generate keywords and meta descriptions, or as a spelling and grammar checker.
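Here is a sketch of that pipeline, assuming the openai-whisper and gpt4all Python packages; the prompt wording and file names are just illustrative, not anything official.

```python
import whisper
from gpt4all import GPT4All

# Step 1: transcribe the video with Whisper.
stt = whisper.load_model("base")
transcript = stt.transcribe("my_video.mp4")["text"]

# Step 2: hand the transcript to the local Llama 3 model.
# Long transcripts may need chunking to fit the model's context window.
llm = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
prompt = (
    "Here is a video transcript:\n\n"
    f"{transcript}\n\n"
    "Give me ten SEO keywords and a 155-character meta description, "
    "and flag any spelling or grammar mistakes."
)
print(llm.generate(prompt, max_tokens=400))
```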

So there you have Llama 3 in all its glory (or not). I get why people hate the idea of AI models like this one generating text so easily, but for me, having these tools locally means I can keep my data private and still get things done. So for now, I'm mostly impressed.