The Assistant Comes to Life: First Dashboard, First User Flow

Up until Sprint 5, the assistant was working behind the scenes. It had brains, but no face. You could talk to its APIs, but there was nothing you could actually click on or see. This sprint changed that.

For the first time, the assistant stepped into the spotlight with a working dashboard, a login flow, and end-to-end tests that prove the whole system works from browser to database and back again. Alongside the things you can see, I also strengthened the things you can’t, like logging, error handling and database speed. These invisible supports are what stop systems from crumbling when real users arrive.

Teaching the Assistant to Understand Tone

The first big skill added was sentiment analysis. In simple terms, this is the assistant learning to sense whether text is positive, negative or neutral.

I started with a rule-based system. Think of it like a thermometer for language: if the reading is high, the message is positive; if it’s low, it’s negative; if it sits in the middle, it’s neutral.

It’s not machine learning yet, but it’s free to run and completely predictable. In a business setting, that predictability can be more valuable than fancy algorithms. You need to be able to explain to stakeholders exactly how the system made its choice.
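
As a sketch, a rule-based classifier can be as simple as scoring words against two lists. The word lists and the zero threshold below are illustrative, not the project’s actual rules:

```python
# Illustrative rule-based sentiment classifier. The word lists and the
# zero threshold are assumptions, not the real ruleset.
POSITIVE = {"love", "great", "happy", "excellent", "good"}
NEGATIVE = {"hate", "bad", "terrible", "awful", "broken"}

def classify(text: str) -> str:
    """Count positive and negative words; the sign of the total decides."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Because every decision traces back to a word list, you can show a stakeholder exactly which words tipped the result.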

I connected this through a new FastAPI endpoint called /sentiment. Every result is stored in a Postgres database. That matters because dashboards aren’t about one-off answers. They’re about spotting patterns over time.

To prove it worked, I wrote integration tests. One checked the output looked right, and another made sure the data was really saved in the database. Without that, the dashboard would look fine but be running on empty.

Giving the Assistant a Face

With the backend in place, I turned to the frontend. I set up a React app, styled it with Tailwind CSS and used shadcn/ui components to keep everything consistent.

This sprint wasn’t about making things look perfect. It was about building the scaffolding that future pages can slot into. Think of it like laying down the road before the cars arrive.

I also added a simple login page. Right now, it drops a dummy token into local storage. It’s a bit like giving visitors a guest pass before they get their proper ID card. In a real company, this would connect to secure identity providers like Google or Azure AD, but for now it lets us build and test user flows without getting stuck.

The best part is the new dashboard page. It shows sentiment counts and displays them in a bar chart. It’s simple, but it proves the end-to-end loop: you type something in, the system analyses it, stores the result, and shows it back as a chart.
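
On the data side, the chart only needs counts per label. A sketch of that aggregation, again with sqlite standing in for Postgres and the table name assumed:

```python
import sqlite3

# sqlite stands in for Postgres; the table and labels are assumed names
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sentiments (label TEXT)")
conn.executemany("INSERT INTO sentiments VALUES (?)",
                 [("positive",), ("positive",), ("negative",), ("neutral",)])

def sentiment_counts(conn: sqlite3.Connection) -> dict[str, int]:
    """One GROUP BY gives the bar chart everything it needs."""
    rows = conn.execute(
        "SELECT label, COUNT(*) FROM sentiments GROUP BY label"
    ).fetchall()
    return dict(rows)
```

Keeping the aggregation in SQL means the frontend stays a thin layer that just draws whatever counts come back.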

Strengthening the Foundations

Behind the curtain, I added several important features that make systems more reliable:

  • CORS middleware: Like airport security, every request is checked against an approved list before it can enter.
  • Database indexing: Just like the index in a book, it lets the database jump straight to the right page instead of flipping through everything.
  • Structured logging: Each request gets a tracking number, like a parcel. If something goes wrong, I can follow its journey.
  • Global error handler: Instead of messy error screens, the system now responds in a clear and consistent way.

These don’t show up in a screenshot, but they’re what turn a fragile demo into something that can grow safely.

Testing, Testing, Testing

This sprint was also about testing:

  • Unhappy paths: I tested empty input, overly long text and even forced database errors. These confirmed the system fails safely and gives the right error codes. These are the tests that save you at 3am when something breaks.
  • Frontend smoke tests: Using Vitest, I tested the login flow. At first it failed on little things like button text and redirects. Fixing these made the setup stronger.
  • Cypress kick-off: I set up Cypress for end-to-end testing. These tests mimic a real user logging in and checking the dashboard. It’s like a dress rehearsal before opening night.

Dealing with the Hiccups

Of course, not everything was smooth. Tailwind threw errors until I fixed the config. Database rollbacks didn’t drop tables properly, so I rewrote the scripts. And the login test picked up small text mismatches that humans might miss but automation didn’t. Each bump I fixed removed a future risk.

How It Would Look in a Big Company

If this were inside a large business, a few things would be different:

  • The sentiment classifier would likely use a pre-trained model, with checks for fairness and explainability.
  • Login would be tied into secure enterprise identity systems.
  • Logs would flow into a central monitoring tool, not just the console.
  • Database speed would be tracked with alerts.
  • Tests would run on multiple browsers and devices, not just one.

Here, I’ve taken quick wins to keep things moving, but left space to grow into enterprise-grade features later.

The Moment It Came Alive

Sprint 5 was the moment the assistant opened its eyes. It can now take in text, make sense of it, store the result, and show it back to you. That full loop means the assistant is no longer just an idea; it’s a working product.

Next, Sprint 6 will move from showing the past to predicting the future, with forecasting, a proper FAQ page and the first load tests.

The assistant is starting to grow up.
