Objective

The U.S. Navy is revamping its battle-space visualization technology and contracted us to design and build a prototype human-machine interface for next-generation 3D displays. The project requirements: the system must support collaboration, require no headwear such as VR or AR goggles, and accommodate multiple, varied input capabilities.

Skills

Hardware Development

  • Human factors research

  • Concept design

  • Prototyping

  • 3D modeling

Software Development

  • User interviews

  • Information architecture

  • Prototyping

  • Wireframing

  • User flows

  • Communication & presentation

Role

Principal investigator who led a team of three hardware and software engineers in the research, design, and development of an in-the-round, collaborative, three-person workstation.

My team and I designed and built a functional prototype in 12 months. We continue to update the test application iteratively as the prototype is tested and evaluated at the Naval Postgraduate School.

Solution

We began by reviewing previous research the Navy had conducted on using tactical situational-awareness displays to reduce the cognitive burden on the warfighter. Then we collaborated with the MOVES group at the Naval Postgraduate School to prepare and conduct multiple interviews with subject-matter experts and end users, gaining insight into current visualization systems, the cognitive tasks end users were performing, and the pain points they ran into when collaborating on complex tasks.

In parallel with the Naval research, we looked in depth at the state of the art in human-machine interfaces being designed at companies like SpaceX, Apple, Google, Tesla, and General Motors. Our research covered best practices for UI layout, information hierarchy, touch interaction, collaborative functionality, and ergonomics, and it helped us design a collaborative and immersive UI/UX experience.

Impact

  • Initial tests show that participants reported being 2x to 3x more immersed in collaborative activities, like mission planning, when using the PRISM workstation.

  • Tests will conclude at the end of Q2 of 2024.

  • Stakeholders have been so impressed with the work that two additional research and development projects are in contract negotiations to continue development of the PRISM workstation.

Project Highlights

Hardware Development:

We explored multiple designs for potential collaborative battlespace workstations that would emulate large-scale, collaborative 3D displays, as such displays do not currently exist. We settled on two initial concepts to present to the Navy based on the project requirements.

The Navy expressed interest in the "in-the-round" collaboration of the battlespace command display but said the operators would need to be seated if this technology were ever to be installed on a ship. We went back to the drawing board and explored various design solutions before eventually having a "eureka" moment.

We realized we could position multiple state-of-the-art, large-scale gaming monitors in the round, recess them into a shared work surface for over-the-top collaboration, and then render three unique views into the same shared 3D scene, thus emulating the collaborative ergonomics of holographic or field-of-light displays.

The vision for PRISM was born.
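The core rendering idea, three cameras spaced evenly around one shared scene, can be sketched with a few view matrices. This is an illustrative sketch only, not the actual BattleView rendering code; the `look_at` helper, the seat spacing, and the camera offsets are assumptions for demonstration.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a right-handed view matrix for a camera at `eye` looking at `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)           # forward axis
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)           # right axis
    u = np.cross(s, f)                  # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye   # translate world so the camera sits at the origin
    return view

def in_the_round_views(center, radius, height, n_seats=3):
    """One view matrix per seat, spaced evenly around the shared scene center."""
    views = []
    for k in range(n_seats):
        angle = 2 * np.pi * k / n_seats
        eye = center + np.array([radius * np.cos(angle),
                                 radius * np.sin(angle),
                                 height])
        views.append(look_at(eye, center))
    return views

# Three seats around a scene centered at the origin (distances are arbitrary).
views = in_the_round_views(center=np.array([0.0, 0.0, 0.0]),
                           radius=2.0, height=1.5)
# Each matrix maps the same world-space scene into a different seat's camera space,
# so all three operators see one shared 3D volume from their own side of the table.
```

Because every seat's camera looks at the same world-space center, an asset moved in the shared scene updates consistently in all three views, which is what lets three recessed monitors stand in for a single holographic volume.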

Diagrams were then drawn up, our mechanical engineer created a CAD model, and two prototype PRISM workstations were fabricated. One stayed here in Austin, TX, and the other was delivered to the Naval Postgraduate School in Monterey, CA.

Software Development:

Pain Points:

  • Interfaces are outdated, dense, and unintuitive.

  • Complete lack of a 3D visualization component.

  • Collaboration is difficult due to orientation and footprint of displays/workstations.

  • Deconflicting threat priorities is cognitively challenging even in "relatively" simple scenarios.

Solution:

  • Create a 3D geospatial volume where all sensor information is fused into a shared space.

  • Render information visualizations about type, location, status, and capability of critical assets in the scene.

  • Utilize systematized doctrinal rules of engagement to populate a list of threat cards ranked in priority.

  • Use the same system to populate doctrine cards with suggested tactics, techniques, and procedures.
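The ranking step above can be sketched as a small rule-based scorer. Everything here is hypothetical: the `Threat` fields, weights, and rules are stand-ins for doctrinal rules of engagement, which are not public.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    # Hypothetical track attributes for illustration only.
    track_id: str
    kind: str            # e.g. "missile", "aircraft", "surface"
    range_nm: float      # distance to the defended asset, nautical miles
    closing: bool        # whether the track is inbound

# Illustrative weights: weapon type dominates the score.
KIND_WEIGHT = {"missile": 100, "aircraft": 50, "surface": 20}

def priority(threat: Threat) -> float:
    """Higher score = higher priority. Closer and inbound tracks score higher."""
    score = KIND_WEIGHT.get(threat.kind, 10)
    score += max(0.0, 100.0 - threat.range_nm)   # proximity bonus
    if threat.closing:
        score *= 1.5                              # inbound multiplier
    return score

def ranked_threat_cards(threats):
    """Return threats sorted into a priority-ordered card list."""
    return sorted(threats, key=priority, reverse=True)
```

Encoding the rules as data (weights and multipliers) rather than hard-coded branches is what makes the card list evergreen: new roles or procedures become new entries, not new code paths.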

We generated user-requirements and user-interface documents to guide and track the development of the BattleView application. These documents guided six iterative design and development cycles, each lasting two months, during which we generated wireframes, scenario information and graphics, 3D asset models, and 3D information visualizations. All of this work was rolled into our most recent build at the end of each cycle.

We focused design and development on an immersive full-screen view for search and scan, as well as a more task-oriented view complete with threat and friendly data drawers, a tactical timeline, and critical alerts.

Early on, we decided to develop a primarily touch-screen interaction model for multiple reasons:

  • Discoverability - people are intimately familiar with touch interfaces and enjoy using them.

  • Modularity - the UI can be customized to fit any task or function.

  • Evergreen - the system can be updated with new roles, techniques, or procedures.

  • Industry alignment - industry leaders are increasingly integrating touch interfaces into their HMIs.

The latest update of the UI features (on the wide screen) a centralized, large 3D scenario view flanked on either side by threat cards and doctrine cards. These are referred to as the "red" and "blue" panels: tabbed panels containing multiple pages of information, neatly organized into cards, about hostile and friendly assets. The top of the screen holds a tactical timeline for temporal information associated with crucial events, and the bottom of the screen shows critical alerts for both friendly and threat assets.

The touch interface provides a massive trackpad for manipulating the 3D scene, as well as for cycling through the various tabs in the blue and red panels. The top half of the UI allows operators to toggle overlays on and off, adjust camera controls, select assets, and filter their view.

Conclusion

Like all worthy tasks, leading this project was both challenging and rewarding. So much wisdom was gained, and I look forward to seeing what PRISM does next.


Most of all though, I look forward to seeing how the concept of collaborating in-the-round continues to grow and evolve along with technology.

Design by Aaron Harlan 2024

Austin, TX