Evaluating Large Language Models (LLMs) goes beyond typical software UX evaluation because of their conversational, generative nature. With new models emerging frequently, applying UX heuristics to their evaluation helps teams make informed decisions about which model best aligns with user requirements.
In this study, I evaluated ChatGPT 5 and Gemini 2.5 Pro using a set of modified heuristics: clarity, match between system and real world, user control and freedom, error prevention, help and guidance, aesthetics, context preservation, and trustworthiness [1]. No prompt engineering guidelines were applied, as the study focused on average users who may or may not use prompt engineering.
For a graduate class, I conducted usability testing of the learning management system (LMS) Canvas. In 2014, Missouri University of Science and Technology (S&T) was looking to replace its current LMS, and our professor, Dr. Wright, suggested we conduct usability testing of the system before implementation.
Project requirements: Although S&T had already signed the contract with Canvas, our study could recommend customizations suitable for the target users of Canvas at S&T. I designed the test plan and conducted the usability testing. Testing was conducted on campus, in person. I recruited five participants who fit the target audience for student use of Canvas.
In this study, the National Park Service website was evaluated. As the user experience researcher, I built the test plan and conducted competitive analysis, card sorting, and usability testing of the website.