DeepTalks: A Personal AI Companion

Apr 7, 2025

DeepTalks is an AI-powered chatbot! Before you roll your eyes and think, “Great, another chatbot,” hang on and hear me out.

I’d been itching to work with a pre-trained model and customize it to my needs. After letting my brain stew on ideas, DeepTalks was born.


Why Another Chatbot?

You might be thinking, “But there are already ChatGPT, Grok, Claude, even Character.ai for persona-driven conversations—so why DeepTalks?”

Here’s my take:

  • A Personal Companion: I wanted someone I could talk to anytime—an unbiased listener who wouldn’t leak or monetize my chats.
  • Privacy First: Your data stays in your own Supabase database, not someone else’s training pipeline.
  • Instant Connection: No more doom-scrolling or Netflix binges when you hit that “nothing’s happening” lull.

From Loneliness to Launch

I’ll be honest—the seed for DeepTalks sprouted from loneliness. You finish work, glance at your phone, and… crickets. So you start scrolling or streaming, and an intended 5-minute break eats up an hour. Sound familiar?

What if, instead, you could:

  • Spill your thoughts to DeepTalks
  • Ask for an opinion on that tricky decision
  • Brainstorm ideas for your next side project

— all without worrying about your privacy or data security?


Building the Brain: Challenges and Solutions

Like any cool project, DeepTalks hit a few roadblocks:

  1. Training Data
  2. Compute Power
  3. Model Strategy

1. Crafting the Dataset

  • Student–Teacher Approach: I tried using a Llama 7B teacher model to generate synthetic conversations for training a lighter 2B student model.
  • Colab vs. Kaggle: Colab’s free GPUs timed out after 2–3 hours, nuking progress. Kaggle offered 30 GPU hours weekly—goldmine! But generating quality data still ate up time.
  • Shortcuts: I even peppered ChatGPT and Claude with prompts to whip up data, but daily limits slowed me down.
  • The Winner: A Hugging Face HyperThink dataset fit my needs perfectly, letting me train Llama 2B and experiment with DistilGPT-2. Ultimately, Phi-2 + PEFT adapters gave me crisp, context-aware responses. You can check out the exact model here: phi2-memory-lora on Hugging Face.
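For the student–teacher route, each teacher-generated exchange has to be packed into a record the trainer can consume. A minimal sketch of that step, assuming a simple prompt/completion JSONL format (the record shape and names are my illustration, not the exact DeepTalks pipeline):

```python
import json

def to_training_record(user_turn: str, teacher_reply: str) -> dict:
    """Pack one synthetic teacher-generated exchange into a
    prompt/completion pair for fine-tuning the student model."""
    return {
        "prompt": f"User: {user_turn}\nAssistant:",
        "completion": f" {teacher_reply}",
    }

def write_jsonl(records, path):
    """Serialize records as JSONL, one record per line."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    # Hypothetical teacher output for one conversational turn.
    pairs = [("I had a rough day at work.",
              "That sounds draining. What part weighed on you most?")]
    write_jsonl([to_training_record(u, r) for u, r in pairs],
                "deeptalks_synthetic.jsonl")
```

The same record shape works whether the teacher is Llama 7B, ChatGPT, or a ready-made dataset like HyperThink; only the source of the pairs changes.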

2. Fine-Tuning and PEFT

With Phi-2 as the backbone and phi2-memory-deeptalks adapter layered on top, DeepTalks specializes in deep, “heart-to-heart” conversations. The result? A model that remembers context, follows your moods, and offers thoughtful replies.
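In practice, “remembering context” with a small backbone like Phi-2 mostly comes down to replaying recent turns inside the prompt window. A minimal sketch of that idea (the format, names, and budget are my illustration, not the actual DeepTalks code):

```python
# Crude character budget standing in for Phi-2's ~2K-token context window.
MAX_CHARS = 4000

def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    """Pack the most recent (user, assistant) turns that fit the budget,
    newest turns kept first, then append the incoming message."""
    tail = f"User: {new_message}\nAssistant:"
    kept = []
    used = len(tail)
    # Walk history newest-first, keeping turns while they still fit.
    for user, bot in reversed(history):
        turn = f"User: {user}\nAssistant: {bot}\n"
        if used + len(turn) > MAX_CHARS:
            break
        kept.append(turn)
        used += len(turn)
    return "".join(reversed(kept)) + tail
```

Older turns silently fall off the front of the window, which is why long-term personalization needs the retraining loop described later rather than prompt replay alone.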

3. Hosting Headaches

I deployed DeepTalks on a Hugging Face Space—you can try it here: DeepTalks Space. Unfortunately, the free tier’s CPU-bound environment takes 7–10 minutes to generate each reply, making real-time chat impractical.

  • Next Steps: I’m exploring ways to host on Kaggle’s GPU runtime for personal use and researching cost-effective GPU hosting so that DeepTalks can chat at the speed of thought.

What’s Next for DeepTalks?

  • Personalization Pipeline: Log conversations in your own Supabase database and retrain DeepTalks so it gets to know you better.
  • Performance Boost: Find the sweet spot between speed, cost, and response quality.
  • Community Feedback: Your insights are the fuel that powers improvements—drop a comment, take the polls, or even submit your own conversation data!
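For the personalization pipeline, each exchange would be logged as a row that later becomes retraining data. A sketch of the row shape, assuming a `conversations` table (the column names are my assumption, not the actual DeepTalks schema):

```python
from datetime import datetime, timezone

def conversation_row(user_msg: str, bot_reply: str, session_id: str) -> dict:
    """Shape one exchange as a row for a hypothetical Supabase
    `conversations` table; column names are illustrative."""
    return {
        "session_id": session_id,
        "user_message": user_msg,
        "bot_reply": bot_reply,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# With the supabase-py client, inserting the row looks roughly like:
#   supabase.table("conversations").insert(row).execute()
# Exporting these rows later yields the dataset for the next
# fine-tuning pass, closing the personalization loop.
```

Keeping the log in your own Supabase project is what makes the privacy claim hold: the retraining data never leaves infrastructure you control.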

I’m excited to keep refining DeepTalks into a truly personal AI companion. If you have any suggestions or feedback, please let me know!


Thanks for reading!!
