July Updates
- Jul 3, 2025
- 3 min read
Hello and welcome back to another update from Project Ver. We have been very busy since the end of Semester One in June after all our exams, and we are very excited to share what we have been up to since the last blog post.
Firstly, we are delighted to share that we have been invited to present Ver 2.0 at an assistive technology focus group at the end of the month, hosted by Guide Dogs Australia. This is a fantastic opportunity for us to gain more valuable feedback from the low/no vision community on our device and prototype, so we can ensure we are bringing real value to the community.
Since the beginning of June, we have been in the Rapid Prototyping phase of the project. On the hardware side, we now have initial schematics drawn up for our own PCB: a custom breakout board for the Raspberry Pi, the single-board computer used in Ver 1.0. By making our own board with only the components we need, we can drastically reduce the size of Ver 2.0, making it even more streamlined and portable than its predecessor. We have also selected new cameras for Ver 2.0 that will produce crisper images and let the device work no matter the environment. To help make Ver 2.0 smaller and more portable, we are also redesigning the power source, exploring different options to integrate battery cells into the device instead of relying on an external power bank. Finally, for our hardware, we have begun designing the case and accessories for Ver 2.0 and are looking forward to presenting some preliminary CAD models soon.
We also have several exciting software updates. We are continuing to explore and test a variety of large language models, both running directly on-device and through cloud-based systems like Google’s Gemini, which allows us to assess the best balance between speed, privacy, and intelligence. We are also testing different speech-to-text models locally to ensure accurate and responsive voice recognition in real-world conditions. In addition, we are evaluating natural-sounding text-to-speech models to make the AI responses sound more human and engaging.
We are still experimenting with advanced features of different cloud large language models in order to expand what our system can do. To make this experience even easier for our users, we have been developing an Android app that simplifies connectivity with their headphones and allows them to interact discreetly with our system by sending questions directly through the app.
Finally, we are excited to engage this week with another major contributor to the assistive technology space, Aria Research, a Sydney-based startup developing AI assistive glasses for people with low/no vision. We hope to learn a lot from Aria about developing AI assistive technology made for the community.
We are very pleased with the progress we have been making over the winter break, and we look forward to sharing more updates with you as we continue our prototyping. Until then, feel free to reach out via email at projectver2025@gmail.com or contact us through our socials if you have any feedback or questions; we would love to collaborate with you.
The above images come from our testing of optical character recognition for reading product labels. From left to right are the original image, the Canny edge-detected image, and finally the cropped and brightened image.
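As a rough illustration of the crop-and-brighten step in that pipeline, here is a minimal numpy sketch. The edge map itself would come from an edge detector such as OpenCV's `cv2.Canny`; the `margin` and `gain` values here are illustrative assumptions, not our tuned parameters.

```python
import numpy as np

def crop_and_brighten(image, edges, margin=10, gain=1.4):
    """Crop a grayscale image to the bounding box of its edge map,
    then brighten the crop. `edges` is a 0/255 edge image, e.g. the
    output of a Canny edge detector."""
    ys, xs = np.nonzero(edges)
    if ys.size == 0:
        return image  # no edges found: return the image unchanged
    # Bounding box of all edge pixels, padded by a small margin and
    # clamped to the image bounds.
    top = max(ys.min() - margin, 0)
    left = max(xs.min() - margin, 0)
    bottom = min(ys.max() + margin, image.shape[0] - 1)
    right = min(xs.max() + margin, image.shape[1] - 1)
    cropped = image[top:bottom + 1, left:right + 1]
    # Simple gain-based brightening, clipped to the valid 8-bit range.
    return np.clip(cropped.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```

The brightened crop is then what gets handed to the OCR engine, so the text region dominates the frame instead of the background.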
The above images come from our app development. The first two showcase the current interface, which can scan for Bluetooth devices in the area, connect to a pair of Bluetooth headphones, and let users send commands discreetly. The final image shows the receiver getting the messages sent from the app.
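To give a sense of how the app-to-receiver messaging can work, here is a purely hypothetical Python sketch of a command format (not our actual wire protocol): a length-prefixed JSON payload, so the receiver always knows how many bytes belong to one message on the stream.

```python
import json
import struct

def encode_command(command, payload=None):
    """Serialize a command as length-prefixed JSON: a 4-byte
    big-endian length header followed by the UTF-8 JSON body."""
    body = json.dumps({"cmd": command, "payload": payload or {}}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_command(data):
    """Inverse of encode_command: read the 4-byte length prefix,
    then parse exactly that many bytes of JSON."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))
```

A length prefix like this keeps message boundaries unambiguous even when the transport delivers bytes in arbitrary chunks, which is a common concern for serial-style Bluetooth links.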