Tinkering with AI: My First Steps

Lately, I’ve been exploring the realms of Artificial Intelligence (AI) and Machine Learning (ML). My interest was sparked by a project at my day job, where we identified the potential for a simple ML model to enhance our data mapping process. We often encounter straightforward cases where source values directly map to expected system values, a task easily managed with a dictionary for exact matches:

Source value -> Mapped value: Apple -> AP, Orange -> OR, Pear -> PE, Pineapple -> PI, etc.

However, challenges arise with unpredictable source values — be they misspelled words, abbreviations, or other variations:

Source value -> Mapped value: Apl -> AP, Oranje -> OR, Per -> PE, pine pple -> PI, etc.

To address this, we’re considering integrating an ML model into our data mapping service. Initially, we might employ simple algorithms like the Levenshtein distance for fuzzy matching, with plans to evolve our approach as we accumulate more data. Our goal is to refine the model so it accurately predicts the correct mapped value for each source value, relying on user feedback to continuously improve its accuracy.
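
To make that concrete, here is a minimal sketch of what such a fallback could look like: try the exact-match dictionary first, then fall back to the closest known key by edit distance. The threshold and helper names are invented for illustration, not our actual service code.

```python
# Sketch: exact-match dictionary with a Levenshtein-based fallback for
# messy source values. The threshold is an illustrative guess.
EXACT_MAP = {"Apple": "AP", "Orange": "OR", "Pear": "PE", "Pineapple": "PI"}

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert = current[j - 1] + 1
            delete = previous[j] + 1
            substitute = previous[j - 1] + (ca != cb)
            current.append(min(insert, delete, substitute))
        previous = current
    return previous[-1]

def map_value(source: str, max_distance: int = 3) -> str | None:
    """Try an exact match first, then the closest known key within the threshold."""
    key = source.strip()
    if key in EXACT_MAP:
        return EXACT_MAP[key]
    best_key = min(EXACT_MAP, key=lambda k: levenshtein(key.lower(), k.lower()))
    if levenshtein(key.lower(), best_key.lower()) <= max_distance:
        return EXACT_MAP[best_key]
    return None  # hand off to a human and collect as future training data

print(map_value("Apl"))        # AP
print(map_value("pine pple"))  # PI
```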

I started by seeking resources to grasp the basics and potentially kickstart a small-scale project. My search turned up several insightful resources.

These resources, while focusing on large language models, provided valuable insights applicable to our project’s smaller scale. Encouraged by this initial research, I discovered scikit-learn, a Python library that seemed perfectly suited for our needs. My experimentation led to a basic model script, a promising start despite our current limited dataset.
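
For flavour, here is roughly the kind of scikit-learn script I ended up with. The training examples below are invented stand-ins for our (still very small) labelled dataset, and character n-gram features plus logistic regression are just one reasonable first setup, not a final design.

```python
# Minimal scikit-learn sketch: classify noisy source strings into mapped
# codes using character n-gram features. The training rows are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_values = ["Apple", "Apl", "appel", "Orange", "Oranje", "orng",
                "Pear", "Per", "peear", "Pineapple", "pine pple", "pinapple"]
train_labels = ["AP", "AP", "AP", "OR", "OR", "OR",
                "PE", "PE", "PE", "PI", "PI", "PI"]

# Character n-grams are forgiving of typos and stray spaces, which is
# exactly the kind of noise we see in real source values.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_values, train_labels)

print(model.predict(["Aple", "ornge", "pine apple"]))  # hopefully AP, OR, PI
```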

This venture into AI and ML is both exciting and educational. Understanding the mechanics of these technologies firsthand has demystified the process, making it seem more accessible than ever before. It’s fascinating to see how tools like OpenAI and Google’s chatbots, powered by large language models, operate and how they might influence future projects.

My next endeavor involves experimenting with Meta’s open-source large language model, Llama. Programming, particularly with the advent of AI tools, remains an endlessly fascinating field, offering the prospect of creating intelligent systems capable of learning and evolving beyond rigid, predefined instructions.

I’ve played around with OpenAI’s APIs before, and while it was an engaging experience, the latency and potential costs made it less viable for me at the moment. However, accessing a powerful LLM for your application doesn’t necessarily mean relying on a private API. You can tap into open-source large language models, such as Llama, to build your chatbot, or utilize open-source libraries to develop a specialized model tailored to your needs. All it takes is a laptop, a code editor like VS Code, and some time dedicated to experimenting.
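
Running Llama 2 locally can be as simple as a few lines once you have a quantized model file on disk. Here is a sketch using the llama-cpp-python library; the model path is a placeholder for whichever GGUF file you download, and the prompt is just a smoke test.

```python
# Sketch: load a locally downloaded, quantized Llama 2 model with
# llama-cpp-python and run a single completion. The model path is a
# placeholder for wherever you keep the GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

response = llm(
    "Q: Name three things to check before booking a summer trip. A:",
    max_tokens=256,
    stop=["Q:"],
)
print(response["choices"][0]["text"].strip())
```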

Below are a few ideas I plan to work on using Llama 2. I am not planning on creating public apps, just writing scripts that give me the output I need for each of the following; at least, that’s the start:

I want to use the power of Llama 2 to create personalized travel itineraries through Europe, perfectly tailored for the summer season. By entering preferences such as must-visit destinations, interests (like cultural landmarks, natural beauty, or culinary adventures), and any travel restrictions (think budget or time limits), I’m looking to get customized travel plans. The model would sift through extensive travel data, user reviews, and up-to-the-minute weather reports to suggest routes that promise both enjoyment and efficiency. It could also unearth less-known spots and hidden treasures, offering a travel experience that veers off the beaten path and delves deeper than the usual tourist fare.
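
As a first step, the “entering preferences” part could be nothing more than a small prompt builder feeding the local model shown earlier. The field names and wording below are hypothetical placeholders I would expect to iterate on.

```python
# Hypothetical prompt builder for the itinerary idea; the preference fields
# and phrasing are placeholders, not a finished design.
def build_itinerary_prompt(destinations, interests, constraints):
    return (
        "You are a travel planner. Create a day-by-day summer itinerary "
        f"through Europe covering: {', '.join(destinations)}. "
        f"The traveller cares most about {', '.join(interests)}. "
        f"Constraints: {constraints}. "
        "Prefer lesser-known spots over the usual tourist stops."
    )

prompt = build_itinerary_prompt(
    destinations=["Lisbon", "Ljubljana", "Kraków"],
    interests=["food", "hiking"],
    constraints="two weeks, mid-range budget",
)
# The prompt would then be fed to the same local Llama 2 instance as above.
```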

Now, imagine an app powered by Llama 2, designed to pinpoint the best flight times and routes to reduce delays and turbulence. This app would comb through historical weather patterns, airline performance records, and upcoming weather forecasts to recommend the ideal flight bookings. Tailored for anxious flyers or anyone seeking a smoother journey, this could be a game-changer. I’m considering developing it into either an app or a website. Beyond scheduling, it could share insights on selecting seats to lessen turbulence effects and offer tips for coping with flight anxiety.

I’m creating this just for the fun of it, and I understand that such features might already exist within airline apps or travel websites. However, building my own version with an open-source LLM like Llama offers a chance to craft a travel tool that’s uniquely mine while delving deeper into the world of AI and ML.

Enjoy your weekend!