Major Project
02/10/2024 - 27/12/2024
Jonathan Wiguna Halim 0356790
Major Project
MIB
Progression
Week 01
We only received a briefing about what we can do in our major project; if we have an idea, we can share it and ask Ms. Anis for her thoughts.
Week 02
Mike:
VR Game Design: SHEPOT
https://drive.google.com/drive/folders/1oF9BKcOXRR6YCwW46mGWNTYQF_KPPiKx?usp=sharing
Discussed using sonic visualization as a VR game mechanic
Suggested thinking beyond games and looking into social impact and serious games
Since sonic visualization is stereotypically shown in black and white, suggested going beyond monotone colours and exploring infrared colour palettes for a more individual design
Next Week:
To show visualization design references and link them to the gameplay mechanics next week
To confirm the scope of levels involved and a brief description of each level's content and gameplay
To provide technical research on the VR development kit, based on either Unity or Unreal, and clarify the level of difficulty to produce the specific VR mechanics
Student:
Finding other ways to gamify the sonar scanner
Testing the sonar scanner inside Unity
Mike:
Requested a few case studies that use sonic design with Unity and AR technology
The idea so far is to create awareness through ocean-floor debris detection. Advised to consider something more impactful and relatable to mass users, since ocean-floor preservation is not something the masses can relate to.
Suggested 4-5 ideas on assistive technology with sonic design:
Urban planning to manage sound pollution
Urban planning to design accessible and equitable navigation for the visually impaired
Simulation design for accessible and equitable navigation planning for the visually impaired on Taylor’s University campus grounds
A gamified version of the navigation planning for the visually impaired on Taylor's campus grounds via scavenger hunts
Adaptation of 360° interactive video tours for the visually impaired, using sound clips instead of the 360° videos, where the interactive components would rely on audio emitters or explanations
Next Week:
Provide tested evidence of the Unity sound detection system, with an audio input-output relay and data integration with either heat mapping or narrative audio output
Choose one of the suggested ideas in which to implement the tested system
Mike:
Key Components for a VPS Unity3D Application for the Visually Impaired:
VPS Integration:
VPS technology helps pinpoint locations and environments using camera data or specialized sensors. Instead of using GPS, which might not be precise indoors, VPS uses visual markers and other environmental cues to position a user.
Unity3D could leverage VPS by integrating computer vision libraries like ARKit, ARCore, or any other compatible VPS SDKs.
The system could provide audio or haptic feedback based on the user’s position within the virtual space or physical environment.
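To make that feedback loop concrete: once a VPS SDK drives the tracked camera pose inside Unity, guidance can be derived from ordinary transforms. A minimal sketch, assuming a hypothetical waypoint and beep source; the names and distance thresholds are illustrative, not from any specific SDK:

```csharp
using UnityEngine;

// Sketch: a beep repeats faster as the user approaches a target waypoint.
// The trackedCamera pose is assumed to be updated by the VPS/AR session.
public class ProximityBeeper : MonoBehaviour
{
    public Transform trackedCamera;   // pose updated by the VPS/AR SDK
    public Transform waypoint;        // the location to guide the user to
    public AudioSource beep;          // short click or beep clip
    public float maxInterval = 2f;    // seconds between beeps when far away
    public float minInterval = 0.2f;  // seconds between beeps when close

    private float nextBeepTime;

    void Update()
    {
        float distance = Vector3.Distance(trackedCamera.position, waypoint.position);

        // Map distance (0..10 m, illustrative range) to a beep interval:
        // the closer the user is, the faster the beeps.
        float t = Mathf.Clamp01(distance / 10f);
        float interval = Mathf.Lerp(minInterval, maxInterval, t);

        if (Time.time >= nextBeepTime)
        {
            beep.Play();
            nextBeepTime = Time.time + interval;
        }
    }
}
```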
Accessible UI/UX:
Voice Guidance: Incorporate speech synthesis to provide real-time feedback and instructions based on the user’s movements and surroundings (see the sketch after this list).
Haptic Feedback: Vibrations or patterns in a handheld device to guide the user, especially when speech feedback isn't suitable (e.g., noisy environments).
Simple Interaction: Controls should be minimal and rely on gestures, voice, or limited buttons.
Contrast and Audio Cues: Any visual elements should be high contrast for users with partial vision. Use spatial audio to guide users towards key objects or directions.
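Since Unity has no built-in speech synthesis, one way to prototype the voice guidance above is to key pre-recorded voice clips (such as the AI-generated narration used later in this project) to instruction IDs. A minimal sketch with illustrative names:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: plays a pre-recorded guidance clip by ID, interrupting any
// previous prompt so instructions never overlap. Names are illustrative.
public class VoicePrompter : MonoBehaviour
{
    [System.Serializable]
    public struct Prompt
    {
        public string id;       // e.g. "welcome", "turn_right"
        public AudioClip clip;  // pre-recorded voice line
    }

    public Prompt[] prompts;
    public AudioSource voice;   // 2D source dedicated to guidance speech

    private Dictionary<string, AudioClip> lookup;

    void Awake()
    {
        lookup = new Dictionary<string, AudioClip>();
        foreach (var p in prompts) lookup[p.id] = p.clip;
    }

    public void Say(string id)
    {
        if (lookup.TryGetValue(id, out var clip))
        {
            voice.Stop();        // interrupt any previous prompt
            voice.clip = clip;
            voice.Play();
        }
    }
}
```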
User Interaction:
Environment Mapping: Use VPS to help the user understand their surroundings and guide them through spaces. This could include virtual environments where users can practice navigation or real-world scenarios where they receive navigation assistance.
Obstacle Detection and Notification: For real-world navigation, you could add object detection features to alert users of nearby obstacles, like doors, walls, or furniture, by combining VPS with object recognition tools.
Voice Commands:
Allow users to interact with the system using voice commands. They can request information about their surroundings, ask for directions, or customize settings (e.g., changing audio feedback settings).
3D Audio and Sound Design:
Leverage Unity3D’s sound system to create a 3D audio landscape that spatially represents nearby objects or navigation instructions. For example, if the user needs to turn right, they could hear an auditory cue from the right side.
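A minimal sketch of such a directional cue using only Unity's standard audio API; the clip, listener reference, and side offset are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: plays a fully spatialized cue from a point offset to the
// user's right, so a "turn right" instruction is heard from that side.
public class DirectionalCue : MonoBehaviour
{
    public AudioClip turnCue;          // short beep or spoken "turn right"
    public Transform listener;         // the AudioListener (usually the camera)
    public float sideOffset = 2f;      // metres to the listener's right

    public void PlayTurnRightCue()
    {
        // Spawn a temporary source positioned to the right of the listener.
        Vector3 pos = listener.position + listener.right * sideOffset;
        GameObject go = new GameObject("TurnRightCue");
        go.transform.position = pos;

        AudioSource src = go.AddComponent<AudioSource>();
        src.clip = turnCue;
        src.spatialBlend = 1f;         // 1 = fully 3D; panning follows position
        src.Play();

        Destroy(go, turnCue.length);   // clean up after the clip finishes
    }
}
```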
Prototyping and Testing:
A good testable prototype could focus on a simple indoor environment where the system uses VPS to provide guidance to a user, helping them move from point A to point B while providing audio or haptic feedback.
You could also create simulated environments where users can experience different challenges in a controlled space, like moving around obstacles or finding specific locations.
Potential Challenges:
Precision: VPS systems can be less accurate in complex environments. Ensuring high precision in environments with poor lighting or complex architecture may require combining VPS with other technologies like LiDAR.
Real-Time Processing: The system must process information and provide feedback in real-time, ensuring that any latency doesn’t confuse or endanger the user.
Device and Sensor Compatibility: Ensuring the Unity3D project works with various mobile devices and their respective camera systems.
Next Week:
To test the phone-scanned 3D point cloud with a Unity SDK for a visual positioning system (VPS) as soon as possible
Student:
Main Problem:
I can’t build for iOS using my Windows laptop. I already tried using a virtual machine to run macOS on my laptop, but it would not run.
I tried using an Android emulator on my PC, but the camera isn’t working; the feed is all black.
Mike:
The testing with the AR on Unity for iOS and Android platforms didn't work as planned. Suggested to progress with VR since Jonathan has experience in developing VR in Unity.
Since it is going to be VR with Unity, immediate testing should be done to find out whether the control systems could be replaced with voice prompting or commands. The aim is to give the visually impaired access to VR walkthroughs and interactive VR content.
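One quick way to test this is Unity's built-in KeywordRecognizer (UnityEngine.Windows.Speech), which works in the editor and on Windows; iOS/Android builds would need a native speech plugin. A minimal sketch with illustrative phrases:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;   // Windows/editor only

// Sketch: replaces controller input with a fixed set of voice commands.
public class VoiceCommandController : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        string[] keywords = { "move forward", "turn left", "turn right", "stop" };
        recognizer = new KeywordRecognizer(keywords);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log($"Heard: {args.text}");
        // Map each phrase to the movement a controller button would trigger.
        switch (args.text)
        {
            case "move forward": transform.Translate(Vector3.forward); break;
            case "turn left":    transform.Rotate(0f, -90f, 0f);       break;
            case "turn right":   transform.Rotate(0f, 90f, 0f);        break;
        }
    }

    void OnDestroy() => recognizer?.Dispose();
}
```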
Week 06
Mike:
To complete the primary research questionnaire or interviews with the 3 suspected personas.
From the primary research data, form the top 3 personas.
Each of these personas needs to be analysed with a user journey map: one journey map per persona. The app’s features will be based on the final row of the user journey maps.
Group the identified app features, then create a table that compares them with the top 3 competitor apps. The proposed new app should have all the features checked, whereas the top 3 competitors each have only some of them, never all in one. If possible, also do a price comparison of the apps.
From the identified features of the new app, proceed to create the user flowchart. This flowchart should match the 3 user journey maps exactly; the user journey map and the flowchart are the same thing, just visualised differently. Check the flowchart until all of the user needs are met. Only when the flowchart is confirmed should the wireframe designs begin.
Begin the wireframe designs by referencing visual research on the best layout designs, navigation designs, and UI systems such as the buttons, the frames, etc. This visual research needs to be communicated using a mood board. Proceed to create the wireframe in either Adobe XD or Figma.
Lastly, present the art direction for the final UI design of the entire app. This needs to be communicated using a mood board referencing the selected art direction elements collected from various app designs. The art direction rationale must address all of the primary data's needs.
Razif:
At first, Mr Razif misunderstood the app; he thought it was like a GPS for blind people, which is why he commented that I was the one deciding where the user wanted to go instead of letting the user decide.
Pay attention to the spatial sound, because it is the key point of this app. If this key feature is not on point, it will be hard for users to know what their surroundings are.
Zeon:
Mr Zeon asked whether I had already conducted research with blind people, and he suggested talking to them soon.
He asked about my target audience: whether I want to address the fully blind or the partially blind, because fully blind users do not require much attention to the visuals.
Student:
I couldn't do much this week because I was sick for a couple of days; all I did was look for other blind communities in Malaysia, find them on Facebook, and email them.
The other thing I did was prototype-test another feature, voice recognition for the buttons, because that system is very different from making objects move. These are the only two things I did before my fever set in.
Mike:
Discussed the panel’s feedback regarding their misunderstanding of the app’s function and features. Agreed that the app is about creating a 360° virtual tour experience for the visually impaired and nothing more.
As this is a virtual tour for the visually impaired, the spatial design that is based on sound as the navigation and interactive cue would have to be flawless. The initial prototype testing that was shown during the presentation did not address how different sounds in the spatial design would be able to guide and inform the visually impaired user on where and how to choose the desired direction and functions. These sound cues as navigation and interactive markers would need to be tested via a prototype as soon as possible with the visually impaired.
Create a prototype based on a user journey from the point of downloading the app, onboarding, and setting selections to tutorials and experience launch with the tour features; to be tested on the visually impaired as soon as possible.
Arrange for the interviews with the visually impaired users/personas as the user research.
The GUI or interface design for the sighted users will be a secondary priority.
Student:
I managed to arrange an interview via Google Meet with the Society of the Blind in Malaysia on the 20th of November. I gave a small presentation about the project I am working on, which can help visually impaired people with travelling. They said I can give them the prototype as soon as it is done so they can test it out.
Researched WCAG guidelines for visually impaired users.
Researched places in Indonesia to work with. I chose Suropati Park in Jakarta because it has a fair number of monuments for the user to explore and an easy path to navigate.
I wrote narration describing each monument's meaning and shape, and with the help of AI I generated the narration voice to make it sound nice and professional.
Did prototype testing on the buttons of each page and found a bug: pages 1 and 2 each have a button with a similar voice command, and the app still triggers the first button even after I make it non-interactable once it has been clicked (see the sketch after this list).
From the interview, I gained the insight that they would like a haptic/vibration feature in the app, so users get feedback they can feel. I tried to test it on my iPhone while it was connected to Unity, but the vibration would not play on the phone, and I am still looking for a way to test it. The fallback is to play a vibration sound that mimics the phone's vibration feature in the prototype.
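One possible guard for the duplicate voice command bug above is to route recognized phrases through a single place and only fire buttons that are both on an active page and still interactable. This is a hypothetical sketch, not the project's actual code:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: central router from recognized phrases to UI buttons.
// Buttons on hidden pages or already-disabled buttons are skipped,
// so only the currently valid button responds to a shared phrase.
public class VoiceButtonRouter : MonoBehaviour
{
    [System.Serializable]
    public struct VoiceButton
    {
        public string phrase;   // e.g. "next", "play narration"
        public Button button;
    }

    public VoiceButton[] bindings;

    // Call this from the speech-recognition callback with the heard phrase.
    public void OnPhraseRecognized(string phrase)
    {
        foreach (var b in bindings)
        {
            if (b.phrase != phrase) continue;

            // Guard against the bug: ignore buttons that are on an
            // inactive page or have been made non-interactable.
            if (!b.button.gameObject.activeInHierarchy || !b.button.interactable)
                continue;

            b.button.onClick.Invoke();
            return;   // stop after the first valid match
        }
    }
}
```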
Interview Insights
It is correct, just as my secondary data said: visually impaired people are afraid to go to new places unless they are accompanied by family members or friends who lead the way.
They would prefer navigation for indoors or for places where GPS struggles, such as a monument park, because they said GPS for outdoors is already sufficient.
Their pain points with screen readers for images confirm the insight from my secondary data: developers rarely provide detailed or proper descriptions for screen readers to detect and read.
They preferred minimal typing input for the login and signup system.
Mike:
Conducted an initial interview with the R&D representative of the Malaysian Blind Society. Explained the initial purpose of the app and its features to the representative, who was keen to lend more support.
Advised to prepare a list of fundamental questions about handling a phone: how to switch it on, navigate, and access apps. Also include a second list of questions specifically about the virtual tour app. Once done, these questions will be used in the interview at the next session with the R&D representative.
Until the research from the interview is confirmed to align (or not) with the new direction in place, any design work, technical or otherwise, should be kept on hold.
Student:
Because I couldn't get a reply from the Society of the Blind Malaysia, I used another research method: video learning. It turns out there are a lot of blind people giving advice and tips & tricks on how they navigate their phones and PCs/laptops; here are the insights I gained.
(Visually Impaired)
Insights:
Most visually impaired people can still perceive light.
They have different kinds of canes; the type they use most is a very durable one made of aluminium, because the sturdier the better. They also use the cane to feel the tactile texture of the ground.
Most of them have very good memories, so sometimes they do not need GPS to get somewhere. If they want to go to a new place, they use Google's GPS as a benchmark only, because its voice guidance is not 100% accurate, so blind people still need their cane to stay aware of their surroundings.
If they bring their guide dog, they feel more confident going outside.
(Phone Navigation)
Insights:
They depend heavily on their phone's screen-reader features: TalkBack (Android) and VoiceOver (iPhone).
These features are now accessible enough that all of them use them, because they do not just voice the text on screen but also override the phone's entire navigation.
Their user journey on the phone is almost the same as that of those of us who depend on visuals. The only difference is the time it takes them to learn and memorize every single function; other than that, there is not much difference.
They do not depend on GPS very much; they mostly rely on their guide dog, their cane, and help from other people. If they want to go somewhere new, they usually memorize the route first using Google Maps.
For typing, they can use either the default keyboard or a braille keyboard. For those who already know braille, the braille keyboard makes typing faster: with the default keyboard, to type one character they first have to find it and then tap the screen, whereas with a braille keyboard they can type it in one action, swipe right for space, swipe left to delete, and swipe down for enter.
(Others)
Insights:
Gadgets nowadays have features built into their systems that blind people can turn on to adapt all interaction inputs for blind users.
Apple products have these features built in, on both iPhone and MacBook; on Windows, however, users need to download third-party software to override the interaction inputs, so blind people need someone else to download and set it up for them.
Japan is developing a new device that visually impaired people wear on their feet, connected to their phone. When navigating, if they need to go left, or their destination is on their right, the corresponding foot device vibrates to indicate it.
I also tried another GPS app that blind users recommend on Quora. It turns out its GPS is not accurate. It is nice that, as I walk, the app tells me what is in my surroundings, such as what kind of building is nearby, but it really bothered me that it sometimes speaks the same information twice.
(VoiceOver Self Test)
Findings:
When I tried the VoiceOver features, I did not run into any problem; the only problem was my patience. I think this is because I am used to seeing everything at once and clicking directly, whereas with VoiceOver, even triggering one button means locating it first, which already takes 3-4 finger interactions.
When I want to open an app, Siri helps me a lot, because it opens the app directly without any touch input. But I realized that to open an app with Siri, I need to remember the app's name; that is why my app needs a name that is easy to say and pronounce, with no other app having a similar pronunciation.
I tried the image VoiceOver feature, but it does not really describe in detail the image I am selecting. I tested this on Instagram; when I tried searching for something on Google, it did not describe the image at all.
I tried to swipe through an image carousel like Instagram's, but I couldn't find the gesture for it, either in the gesture library or on YouTube. From this insight: if my app has a content carousel, it needs next and previous buttons.
UI/UX design
Insights:
I need a heading for each section
I need next & previous buttons for carousel content
It would be nice if users did not need to type to find things, because typing is a hassle that requires a lot of touch input
Source:
How Blind People Use Technology (My Apple Products - An Introduction to Voice Over)
How I use technology as a blind person! - Molly Burke (CC)
How Blind People Text On Their Phones
How A Blind Person Travels: https://youtube.com/shorts/b-anAxOzaZE?si=qlzbx1Z40kzT_chZ
Mike:
The progress has been justified by comparing the relevance of a travelling app for the sighted target user with the visually impaired. The replacement of sight with voice-over narrated descriptions is likened to the text readers' approach to explaining images for the visually impaired. The idea of creating an immersive representation of a certain unique location for the visually impaired by using voice-over narration and sound design is justifiable.
To move forward, the user experience for the visually impaired would need to be explained in a step-by-step process through various scenarios of tourist locations. This user experience will then be aligned or mapped with the navigation design that is specific to the visually impaired; for instance the fixed location spots, the 360 turn-around navigation design, the voice-over explanation or description of what should be ‘seen’ by the visually impaired, the multi-layer interactive content access either to graphics, images, texts or videos.
All the above-mentioned needs to be streamlined and connected by next week.
Mike:
Had completed the user experience step-by-step process from the app launch right up to the first destination. The user flow and the navigation are activated through voice input. The design seems practical, and the images are in place, while the video content relies on a screen-reader approach.
When it comes to video content or graphics and images, the sound design should have both the background or environment sound complemented by objects or event sounds that communicate the story or action of these videos or graphics and images.
Advised to create a prototype to quickly test it on the visually impaired via Zoom call. As this prototype is not produced in Unity but Figma, the interaction via sound input and also the navigation would need to be simulated by manually selecting the interactive selections as though it is done by the users using voice input. This testing is crucial as the entire navigation and user flow is dependent on the accuracy and practicality of this design.
Student:
Task Done:
I am brainstorming names so that when people pronounce them, Siri or any other voice assistant can recognize them. I came up with these names:
Navis (from the word "navigation")
Voyce
Made the brand identity (art direction, logo)
I found Facebook groups with a lot of visually impaired people who can give insights, and thankfully one person was willing to do the test.
Feedback:
3 stars for the overall experience. It was hard because, although I said this is a virtual world, the sound is not really spatial; but if the app is developed further, it might make the overall experience more immersive
3 stars for the clarity and quality of the audio, because it is different from the VoiceOver feature, where they can read word by word; the buttons, however, are clear, and the naming really guides them to the purpose of each function
4 stars for understanding the app's purpose; as long as the developer labels every button and every piece of interactive content, it helps a lot
Not quite a struggle, but it is a unique experience when a narration of a paragraph is speaking and a sound supports it, because VoiceOver does not have this feature; it may feel unusual simply because it is new
A vibration feature might be a good addition
For now the app cannot be called user-friendly, because it is still a prototype, but it is moving in a good direction
Mike:
The progress passed the testing of the Figma prototype with a visually impaired tester and gathered feedback that is mostly positive, which means the project can now progress to Unity for the final high-fidelity prototype.
Points and requests made by the user will be looked into where they are possible within Unity; if not, they will be categorized as later development.
A reminder that the main focus of the project is spatial sound development as an interactive and navigation feature that replaces sight throughout the tour experience. This core feature of the app must be delivered.
Student:
Things done:
The spatial sound:
Made a car traffic system to make it more realistic by generating random cars with different sounds, so every time the user enters, it sounds slightly different
Made a system where, while a narration plays, the surrounding volume decreases to keep the narration clear, and when it stops, the sound gradually returns to normal volume (see the sketch after this list)
Added a walking sound whenever the user walks to a specific spot, so the user knows which state they are in
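A minimal sketch of the narration "ducking" system described above, using a coroutine to fade ambient sources down while the voice-over plays; field names and fade times are illustrative assumptions:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: ambient sources fade down while a narration clip plays,
// then fade back up to normal volume when it finishes.
public class NarrationDucker : MonoBehaviour
{
    public AudioSource narration;      // the voice-over source
    public AudioSource[] ambience;     // traffic, wind, crowd, etc.
    public float duckedVolume = 0.2f;
    public float fadeTime = 0.5f;

    public void PlayNarration(AudioClip clip)
    {
        StopAllCoroutines();
        StartCoroutine(DuckWhilePlaying(clip));
    }

    private IEnumerator DuckWhilePlaying(AudioClip clip)
    {
        yield return Fade(1f, duckedVolume);   // duck the surroundings
        narration.clip = clip;
        narration.Play();
        yield return new WaitWhile(() => narration.isPlaying);
        yield return Fade(duckedVolume, 1f);   // restore gradually
    }

    private IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            float v = Mathf.Lerp(from, to, t / fadeTime);
            foreach (var src in ambience) src.volume = v;
            yield return null;
        }
        foreach (var src in ambience) src.volume = to;
    }
}
```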
Other features:
Added an inactivity manager to detect when no voice command has been given within a time window that I can adjust at any point (see the sketch after this list)
As I can only prototype on PC, the vibrate feature only plays a vibration sound. To enhance it further, I added a voice-over telling the user what kind of action is playing
I made all the assets as simple as possible, because this app will launch on mobile phones
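A minimal sketch of the inactivity manager described above; the timeout value, event wiring, and names are illustrative assumptions:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch: counts seconds since the last recognized voice command and
// fires an event (e.g. opening the command-list panel) once a
// configurable timeout passes.
public class InactivityManager : MonoBehaviour
{
    public float timeoutSeconds = 20f;     // adjustable in the Inspector
    public UnityEvent onInactivity;        // e.g. show the command-list panel

    private float idleTimer;
    private bool fired;

    void Update()
    {
        idleTimer += Time.deltaTime;
        if (!fired && idleTimer >= timeoutSeconds)
        {
            fired = true;
            onInactivity.Invoke();
        }
    }

    // Call this from the voice-recognition callback on every recognized command.
    public void ResetTimer()
    {
        idleTimer = 0f;
        fired = false;
    }
}
```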
Things that is considered to add:
Adding a UI panel so that partially blind users can also read the text
Because a panel with the list of commands currently appears on inactivity, I am thinking of making its buttons interactable, so if the voice recognition suddenly errors, the user can still touch the buttons and experience the tour
Making the 3D environment assets a bit more detailed, so sighted people can also experience it to the fullest
Struggle:
Sometimes, even with help from ChatGPT, the code did not function the way I imagined, so I had to come up with a better and simpler system; this problem is what made me get stuck on the progress.