I’ve been on the Home Assistant train for a while now, and I’ve been slowly migrating my cloud-based services, where I can, to self-hosted alternatives. I started with basic controls for my smart lights and advanced further and further until I had a full-fledged setup capable of replacing nearly everything. I’m practically there now, and I couldn’t be happier. While my Google Home Hub is still hooked up for now, I use Home Assistant’s “Okay Nabu” a lot more these days than “Hey Google”.
The last component I need to replace is the display of the Google Home Hub, and I have a couple of solutions I’m trialling at the moment. The first is an old Huawei tablet, connected to Home Assistant and running Fully Kiosk Browser to give a Home Hub-like feel. It works pretty well, but it’s not quite there yet. I’m also playing around with an ESP32 Cheap Yellow Display (CYD) programmed using ESPHome, and that experience should prepare me for building what I expect to be a true Hub replacement: an E-Ink display hooked up to Home Assistant.
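To give an idea of what that looks like, here’s a minimal ESPHome sketch for a CYD that pulls a temperature entity from Home Assistant and draws it on screen. The pin mapping assumes the common ESP32-2432S028R board, and sensor.outdoor_temperature is a placeholder for whatever entity you want to show:

```yaml
# Minimal CYD display sketch. Pin numbers assume the common
# ESP32-2432S028R "Cheap Yellow Display"; adjust for your hardware.
esphome:
  name: cyd-display

esp32:
  board: esp32dev
  framework:
    type: arduino

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:  # lets Home Assistant push entity state to the device

# Import a Home Assistant entity (placeholder ID) onto the device
sensor:
  - platform: homeassistant
    id: outdoor_temp
    entity_id: sensor.outdoor_temperature

spi:
  clk_pin: GPIO14
  mosi_pin: GPIO13
  miso_pin: GPIO12

font:
  - file: "gfonts://Roboto"
    id: roboto
    size: 24

# Keep the backlight on; the CYD's backlight sits on GPIO21
output:
  - platform: ledc
    pin: GPIO21
    id: backlight

light:
  - platform: monochromatic
    output: backlight
    name: "Display Backlight"
    restore_mode: ALWAYS_ON

display:
  - platform: ili9xxx
    model: ILI9341
    cs_pin: GPIO15
    dc_pin: GPIO2
    invert_colors: false
    lambda: |-
      it.printf(10, 10, id(roboto), "Outside: %.1f°C", id(outdoor_temp).state);
```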
With that said, everything I’ve configured so far works better than Google, and I can’t imagine going back. From more contextual weather reports to better music controls, a local LLM paired with my Home Assistant Voice Preview Edition has been absolutely fantastic. Is it more work? Yes, and that’s why I don’t recommend it to just anyone. However, if you’re more of a technical person and you like to configure your own software and hardware, then you might find yourself going all-in on Home Assistant at some point, like I have.
The Home Assistant Voice Preview Edition has already replaced “Hey Google” for me
Everything Google can do, and more
Before getting into any of the local LLM stuff, the Voice Preview Edition has been a phenomenal investment in general. Invoking it with “Okay Nabu” has replaced “Hey Google” in practically every situation. I can set timers, check the weather, turn lights on and off, and more. If it exists in Google, it either already exists in Home Assistant or is trivial to configure yourself with an automation or a blueprint that someone else has already made. The typical “set a timer” and “turn X light off” commands work out of the box, as do media player controls, script execution, to-do lists, and more.
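As an example of how simple a custom command can be, here’s a rough sketch of an automation driven by Home Assistant’s conversation trigger. The trigger phrases, area, and entity IDs are placeholders for illustration:

```yaml
automation:
  - alias: "Voice: movie night"
    trigger:
      # Fires when Assist hears either phrase
      - platform: conversation
        command:
          - "movie time"
          - "start movie night"
    action:
      - service: light.turn_off
        target:
          area_id: living_room  # placeholder area
      - service: media_player.turn_on
        target:
          entity_id: media_player.living_room_tv  # placeholder entity
      # What the voice assistant says back
      - set_conversation_response: "Enjoy the film."
```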
For example, when I’m cooking, I’ll typically make use of timers. While I haven’t configured a visual component yet, it’s rather trivial to set up a display to pull from Home Assistant and show a timer on the screen. Plenty of ESP32-based devices with a screen can do this through ESPHome, and a conditional card could be used on a Home Assistant dashboard to show a timer entity when one is set. It’s obviously a lot more work compared to a regular old Google Home Hub, but that’s also exactly the point: you make what you need.
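That conditional card is only a few lines of dashboard YAML. Assuming a hypothetical timer helper called timer.kitchen_timer, something like this would keep the card hidden until the timer is actually running:

```yaml
# Dashboard card: only visible while the timer is active
type: conditional
conditions:
  - condition: state
    entity: timer.kitchen_timer  # hypothetical timer helper
    state: active
card:
  type: entities
  title: Kitchen Timer
  entities:
    - timer.kitchen_timer
```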
In terms of day-to-day usage, nothing has really changed. I use “Okay Nabu” in the same way I would have used “Hey Google”, and I haven’t lost out on anything. In fact, Home Assistant has already replaced the “Home” button in my phone’s status bar. It’s fast, it’s powerful, and I control the output. Being able to create my own commands is fantastic, and linking up with an LLM makes it even more powerful.
An LLM is key to making it even better than Google
It can do anything
So, when you’ve configured your voice assistant pipeline in Home Assistant, what do you do next? In my case, I started playing with LLMs. I’ve used local LLMs, Google Generative AI, and OpenAI’s ChatGPT API, and all of them add something to requests that you simply can’t get otherwise. I don’t just have to ask what the weather is like; I can ask if I need to wear a coat today, and it will pull the data from my weather entity and provide a tailored response built around it.
The same goes for controls, too. You can say things like “turn off all the lights except for…” and it will understand the request and process it, responding with data that Home Assistant can interpret as commands. It’s quick, it’s easy, and both a local LLM and ChatGPT have worked just fine for my uses. A local LLM requires the right hardware to run a model, but it works, and it can be a pretty big improvement to your home. Plus, you can always favor processing in Home Assistant first, so requests that don’t need to go to an LLM (such as a basic “turn off X light” command) are executed immediately.
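For scripted use, a request can also be sent to a specific agent with the conversation.process service. This is a hedged sketch; conversation.my_llm_agent is a placeholder for whatever agent ID your LLM integration (Ollama, OpenAI, and so on) registers:

```yaml
# Forward a phrase to a specific conversation agent from a script or automation
service: conversation.process
data:
  text: "Turn off all the lights except for the hallway"
  agent_id: conversation.my_llm_agent  # placeholder agent ID
```

The local-first behavior mentioned above, meanwhile, is a toggle in the voice assistant’s pipeline settings rather than anything you configure in YAML.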
To be honest, incorporating an LLM has been the best part of the entire ordeal. It’s enabled me to set up GLaDOS as my personal home assistant, complete with a custom voice, and it’s pretty entertaining. I’m sure there’s more I’ll discover I can do as time goes on, but at minimum, it matches what Google can do; beyond that, the sky is the limit. I’m still learning ESPHome and have some cool prototypes I’ve been testing out with a CYD, and once I’m done, I’ll be able to deploy a complete Google Home Hub replacement that will let me retire the big G’s screen for good.
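For the curious, a custom voice like that typically boils down to a TTS engine such as Piper with a community-made voice model, invoked through an ordinary tts.speak call. The entity IDs and voice name below are placeholders that depend entirely on your own setup:

```yaml
# Hedged sketch: speak through a Piper TTS entity on a media player.
# tts.piper, the media player, and the voice name are all placeholders.
service: tts.speak
target:
  entity_id: tts.piper
data:
  media_player_entity_id: media_player.kitchen_speaker
  message: "The cake is a lie."
  options:
    voice: glados  # assumes a GLaDOS voice model is installed
```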
source: https://www.xda-developers.com/replaced-google-home-home-assistant-local-llm/


