Inventors:
Adrian Franks
Jennifer Hatfield
Data Oruwari
Background -
In today’s busy world, it is hard to navigate with all senses available, let alone when vision is impaired or sight is limited by external factors. Sight is a critical asset for everyone. While there are many solutions available to visually impaired people, none of them assess the entire picture, focus on specific dimensions of an environment or activity, or assist in the navigation process quite like this concept.
What differentiates this concept from wearable vision aids such as Aira, Google Glass, and Tobii Pro is the following: the cognitive wearable can learn specific behaviors and preferences of the user. It will know frequently traveled routes and areas so it can guide the user when the terrain changes, serve as the user’s eyes for a shopping experience, or navigate a way through unexpected crowds.
Summary -
The Field of Vision eyewear can provide an experience for the user through voice prompts, visual descriptors, and haptic alerts that guide the user through an urban metropolis or a walk in the country. Integration with other wearables and IoT devices can expand the capability and experience for the user.

Features include:
•    Use GPS data to geofence the user in safe zones;
•    Wayfinding with haptic feedback, prompts to ‘view’ a street scene and effectively navigate crowds;
•    Direction to navigate and complete specific tasks (such as shopping for a new sweater at a retail store of choice or convenience);
•    Assistance in avoiding crowds;
•    Learning the user’s preferences and behaviors to optimize timing of activities;
•    Learning frequent user pathways to navigate under differential conditions (such as construction, weather, or unusual variances in terrain).
Method details -
Method for the cognitive UI to prompt the user with haptic or voice feedback to redirect or guide the user’s course, to notify, or to take alternative action. This is tied to the system sharing environmental inputs with multiple devices for preferred feedback (e.g., sharing knowledge with wearables such as a smartwatch, AirPods, or eyewear). A code sketch of this flow follows the steps below.
•System learns about the environment via the user’s camera and, possibly, connected cameras at the location;
•If objects are in the user’s path, the system prompts the user with voice or haptic feedback to move right or left, slow down, or stop;
•Crosswalks and crowds are identified to the user to let them know what to expect; walk and stop signals at crosswalks are called out to the user via Cognitive UI voice or haptic feedback.
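The following Python sketch illustrates the prompt-and-redirect step above. It is a minimal illustration, not an implementation of any existing product: the Detection record, the FeedbackChannel options, the distance and lateral-position thresholds, and the dispatch function are all hypothetical assumptions.

from dataclasses import dataclass
from enum import Enum


class FeedbackChannel(Enum):
    VOICE = "voice"
    HAPTIC = "haptic"


@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "crosswalk"
    lateral: float     # -1.0 (far left) .. +1.0 (far right) in the camera frame
    distance_m: float  # estimated distance to the object


def choose_directive(detections: list[Detection]) -> str | None:
    """Map the nearest in-path obstacle to a simple course correction."""
    in_path = [d for d in detections if abs(d.lateral) < 0.3 and d.distance_m < 5.0]
    if not in_path:
        return None
    nearest = min(in_path, key=lambda d: d.distance_m)
    if nearest.distance_m < 1.0:
        return "stop"
    if nearest.distance_m < 2.5:
        return "slow down"
    # Steer away from the side the obstacle occupies.
    return "move left" if nearest.lateral > 0 else "move right"


def dispatch(directive: str, channel: FeedbackChannel) -> None:
    """Route the prompt to the user's preferred device (voice or haptic)."""
    if channel is FeedbackChannel.VOICE:
        print(f"[voice] {directive}")                 # stand-in for a TTS call
    else:
        print(f"[haptic] pattern for: {directive}")   # stand-in for a vibration pattern


if __name__ == "__main__":
    scene = [Detection("pedestrian", lateral=0.1, distance_m=2.0),
             Detection("lamp post", lateral=-0.8, distance_m=1.5)]
    directive = choose_directive(scene)
    if directive:
        dispatch(directive, FeedbackChannel.HAPTIC)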

Method for the cognitive system to learn from the user’s past activities and routes; based upon the current location, activity, and route, the system matches historical data with the view from the user’s wearable camera to identify differences in the terrain and communicates the discrepancy back to the user via Cognitive UI voice or haptic feedback. A code sketch follows the steps below.
•Cognitive system learns routes and preferred walking paths, using camera input and GPS navigation systems for analysis;
•Cognitive system identifies types of activities: running, walking, riding as a passenger on a bus, etc.;
•When the user engages in the same activity and route, the system matches the terrain and identifies any changes;
•Changes and variances are communicated to the user via Cognitive UI with voice or haptic feedback.
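A minimal Python sketch of the route-matching step, assuming the ‘terrain signature’ of a route segment can be reduced to a set of scene labels; a real system would compare learned visual features. The RouteMemory class, the route and segment identifiers, and all labels are hypothetical.

from collections import defaultdict


class RouteMemory:
    def __init__(self):
        # route_id -> segment index -> set of scene labels seen on past traversals
        self._segments = defaultdict(dict)

    def learn(self, route_id: str, segment: int, labels: set[str]) -> None:
        """Accumulate what the camera has typically seen on this segment."""
        seen = self._segments[route_id].setdefault(segment, set())
        seen.update(labels)

    def variances(self, route_id: str, segment: int, current: set[str]) -> set[str]:
        """Return labels present now that were never seen on past traversals."""
        expected = self._segments[route_id].get(segment, set())
        return current - expected


memory = RouteMemory()
# Past traversals of the user's walk to work (segment 3 of route "home-office").
memory.learn("home-office", 3, {"sidewalk", "bench", "tree"})
memory.learn("home-office", 3, {"sidewalk", "tree"})

# Today the camera also reports construction barriers on the same segment.
changes = memory.variances("home-office", 3, {"sidewalk", "tree", "construction barrier"})
for label in changes:
    print(f"[voice] Caution: {label} ahead on your usual route.")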

Method for user-set preferences to establish location parameters as a safety zone via voice UI, and for the system to alert when the user is in violation of, or too close to, the set parameters/boundaries. A code sketch follows the steps below.
•User sets preferences to establish ‘mode types’ by setting inputs via voice command;
•Cameras are used by the system to analyze the environment;
•System provides feedback to the user about the zone preferences and prompts for voice acknowledgement of the settings (for example: the user is to stay within a 10-foot radius of machinery to assemble cars).
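A minimal sketch of the safety-zone check, using the 10-foot machinery example above. Positions are planar x/y coordinates in feet for simplicity; a GPS-based zone would use a geodesic distance instead. The SafetyZone class and the warning margin are assumptions for illustration.

import math


class SafetyZone:
    def __init__(self, center: tuple[float, float], radius_ft: float,
                 warn_margin_ft: float = 2.0):
        self.center = center
        self.radius_ft = radius_ft
        self.warn_margin_ft = warn_margin_ft

    def check(self, position: tuple[float, float]) -> str | None:
        """Return an alert if the user violates or nears the zone boundary."""
        dx = position[0] - self.center[0]
        dy = position[1] - self.center[1]
        dist = math.hypot(dx, dy)
        if dist > self.radius_ft:
            return "Alert: you have left the safety zone."
        if dist > self.radius_ft - self.warn_margin_ft:
            return "Warning: approaching the edge of the safety zone."
        return None


# "Stay within a 10-foot radius of the machinery," set via voice command.
zone = SafetyZone(center=(0.0, 0.0), radius_ft=10.0)
for pos in [(3.0, 4.0), (6.0, 6.0), (9.0, 6.0)]:
    alert = zone.check(pos)
    if alert:
        print(f"[haptic/voice] {alert}")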

Method for the system to learn safety zones for the user based on internal and external environments. The system establishes recommended zones via connected cameras, beacons, and geofence technologies. Camera input is analyzed by the system for physical objects. A code sketch follows the steps below.
•System can learn about the environment via camera, analyzing physical movements and repetitive behavior to determine where safety zones should be set;
•Moving objects (such as forklifts, machinery, traffic) are noted as stay-away areas;
•IoT broadcasts signal temperature warnings or chemical hazards;
•Microphones may detect noise that would switch communication from audio to haptic;
•Alerts and warnings are communicated to the user via Cognitive UI with voice or haptic feedback if the user nears safety-zone boundaries or an unexpected object enters the zone.
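The sketch below fuses the input sources listed above (camera, IoT broadcasts, microphones) into alerts, downgrading the feedback channel from voice to haptic when the measured noise level would make audio unreliable. The Event structure, the 85 dB threshold, and all sample values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Event:
    source: str    # "camera", "iot", or "microphone"
    kind: str      # e.g. "moving_object", "chemical_hazard", "noise_level"
    detail: str
    value: float = 0.0


NOISE_HAPTIC_THRESHOLD_DB = 85.0  # assumed level at which voice becomes unreliable


def route_alerts(events: list[Event]) -> list[tuple[str, str]]:
    """Turn raw events into (channel, message) alerts for the Cognitive UI."""
    noisy = any(e.source == "microphone" and e.value > NOISE_HAPTIC_THRESHOLD_DB
                for e in events)
    channel = "haptic" if noisy else "voice"
    alerts = []
    for e in events:
        if e.source == "camera" and e.kind == "moving_object":
            alerts.append((channel, f"Stay clear: {e.detail} nearby."))
        elif e.source == "iot":
            alerts.append((channel, f"Hazard broadcast: {e.detail}."))
    return alerts


sample = [Event("microphone", "noise_level", "factory floor", value=92.0),
          Event("camera", "moving_object", "forklift"),
          Event("iot", "chemical_hazard", "solvent fumes in bay 2")]
for channel, message in route_alerts(sample):
    print(f"[{channel}] {message}")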
Method for the cognitive system to learn user behavior and provide Cognitive UI recommendations to the user for optimizing the timing of activities and routes (e.g., going to the grocery store, performing certain tasks at work). A code sketch follows the steps below.
•System learns movements and timing of activities via GPS connectivity and camera, analyzing physical movements and repetitive routes;
•System conducts traffic and congestion checks via connected cameras and GPS traffic prediction;
•System prompts the user to adjust or optimize timing for a specific activity based on findings, via Cognitive UI.
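A toy sketch of the timing recommendation: the system compares the user's habitual start time for an activity against a congestion forecast and suggests a less crowded slot. The forecast values and the two-hour search window are fabricated placeholders for a real traffic or store-occupancy feed.

def best_start_hour(usual_hour: int, congestion_by_hour: dict[int, float],
                    window: int = 2) -> int:
    """Pick the least congested hour within +/- `window` of the usual time."""
    candidates = range(usual_hour - window, usual_hour + window + 1)
    return min(candidates, key=lambda h: congestion_by_hour.get(h, 1.0))


# Learned habit: grocery run at 17:00. Forecast: 0.0 = empty, 1.0 = packed.
forecast = {15: 0.3, 16: 0.5, 17: 0.9, 18: 0.8, 19: 0.4}
suggested = best_start_hour(17, forecast)
if suggested != 17:
    print(f"[voice] The store is usually crowded at 17:00; "
          f"consider going at {suggested}:00 instead.")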
Method for the system to learn repetitive behavior in an industrial setting to help guide purposeful object placement, equipment handling, and navigation within a specific space. The system uses the Cognitive UI of the user’s preference to provide feedback on how best to adjust behavior. A code sketch follows the steps below.
•User can set preferences to communicate ‘work mode’ with previously set inputs;
•Or the system can learn movements via camera, analyzing physical movements and repetitive behavior to note environment details and identify variances more easily;
•With voice or haptic feedback, the system communicates to the user any variances or adjustments in movement or navigation.
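A rough sketch of ‘work mode’ variance detection, assuming the system remembers where equipment usually sits on the work floor and flags placements that drift beyond a tolerance. The baseline positions, coordinates, and tolerance are illustrative assumptions.

import math

# Learned baseline: object -> usual (x, y) position in feet on the work floor.
BASELINE = {"torque wrench": (1.0, 2.0), "parts bin": (4.0, 0.5)}
TOLERANCE_FT = 1.5


def find_variances(observed: dict[str, tuple[float, float]]) -> list[str]:
    """Compare observed placements against the learned baseline."""
    messages = []
    for name, usual in BASELINE.items():
        if name not in observed:
            messages.append(f"{name} is missing from its usual spot.")
            continue
        ox, oy = observed[name]
        if math.hypot(ox - usual[0], oy - usual[1]) > TOLERANCE_FT:
            messages.append(f"{name} has moved from its usual position.")
    return messages


observed_now = {"torque wrench": (1.2, 2.1), "parts bin": (7.0, 3.0)}
for msg in find_variances(observed_now):
    print(f"[voice/haptic] {msg}")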
Industry Usage
Prior Art Search
Keywords:

Vision Assisted Devices
Cognitive Eyewear
Connected Eyewear
Accessible Eyewear
Smart Glasses
Blind Aid Device
Patents:
US7855657B2 - Device for communicating environmental information to a visually impaired person

Abstract -
An aid for a blind person (1) includes a distance sensor (3), which creates a distance image of an object (2). The distance information that is generated by the distance sensor (3) is transmitted to a tactile matrix (10), which is integrated into a guide stick (11). The blind person (1) obtains information about his or her environment by touching the tactile matrix (10).

US20110092249A1 - Portable Blind Aid Device
Abstract -
A blind aid device including enabling a blind person to activate the blind aid device; capturing one or more images related to a blind person’s surrounding environment; detecting moving objects from the one or more images captured; identifying a finite number of spatial relationships related to the moving objects; analyzing the one or more images within the blind aid device to classify the finite number of spatial relationships related to the moving objects corresponding to predefined moving object data; converting select spatial relationship information related to the one or more analyzed images into audible information; relaying select audible information to the blind person; and notifying the blind person of one or more occurrences predetermined by the blind person as actionable occurrences.