Towards LLM-based Agents for Personalized App Review
Bachelor & Master Thesis
With the rapid growth of mobile apps and an expanding user base, user-centered design and development have become increasingly crucial [1]. Ensuring app usability and optimizing the user experience for diverse user groups are now key competitive factors in the industry.
The rise of large language models (LLMs) has introduced new possibilities for mobile app GUI testing, allowing natural language-driven interactions to explore apps and automatically generate test cases [2]. However, most LLM-powered exploratory GUI testing approaches lack a true user perspective on usability. Currently, app store user reviews provide developers with valuable feedback on bugs and user experience issues, helping them refine their apps [3]. Yet a gap remains: there is no effective, low-cost solution for comprehensively testing and evaluating usability before an app is released.
This project aims to leverage LLMs and related technologies to simulate real user interactions with mobile apps and generate personalized, user-centric app reviews. By mimicking authentic user perspectives, we aim to explore whether such agents can efficiently generate usability and UX feedback, enabling more effective user-centered app design and development.
Required knowledge:
Python programming; understanding of LLMs and relevant technologies; understanding of automated GUI testing techniques for mobile apps
What you will do may include:
- Develop an LLM-based app review agent using open-source frameworks.
- Enhance the agent's workflow through combined techniques, such as prompt engineering, knowledge graphs, and retrieval-augmented generation (RAG).
- Construct mobile app datasets, design experiments, and perform comprehensive evaluation and analysis of the developed agent.
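To make the envisioned agent concrete, the following is a minimal Python sketch of a persona-conditioned review agent of the kind the tasks above describe. All names here (`Persona`, `ReviewAgent`, `stub_llm`) are illustrative assumptions, not part of the project specification; the LLM backend is stubbed so the sketch runs standalone, and in practice it would be replaced by a call to an actual model and coupled with an automated GUI exploration component.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# NOTE: hypothetical sketch -- class and function names are illustrative,
# not prescribed by the thesis topic.

@dataclass
class Persona:
    """A simulated user profile that conditions the generated review."""
    name: str
    traits: str  # e.g. "elderly user who prefers large fonts"

@dataclass
class ReviewAgent:
    persona: Persona
    llm: Callable[[str], str]  # pluggable LLM backend (stubbed below)
    observations: List[str] = field(default_factory=list)

    def observe(self, screen_description: str) -> None:
        """Record a natural-language description of a GUI screen
        produced by an automated exploration step."""
        self.observations.append(screen_description)

    def write_review(self, app_name: str) -> str:
        """Build a persona-conditioned prompt over the collected
        observations and ask the LLM for an app-store-style review."""
        prompt = (
            f"You are {self.persona.name}: {self.persona.traits}.\n"
            f"You explored the app '{app_name}' and saw:\n"
            + "\n".join(f"- {o}" for o in self.observations)
            + "\nWrite a short app-store review from your perspective."
        )
        return self.llm(prompt)

def stub_llm(prompt: str) -> str:
    """Canned response so the sketch runs without an API key."""
    return "3/5 stars: the text on the settings screen is hard to read."

agent = ReviewAgent(
    Persona("Maria", "elderly user who prefers large fonts"), stub_llm
)
agent.observe("Settings screen with small grey text on a white background")
review = agent.write_review("DemoApp")
print(review)
```

The design choice worth noting is the pluggable `llm` callable: it lets the same agent skeleton be evaluated with different models or prompt-engineering strategies, which matches the experimentation and evaluation tasks listed above.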
For more details about this topic, please contact Dr. Shengcheng Yu (shengcheng.yu[at]tum.de).
Reference:
[1] Indrani Medhi. 2007. User-centered design for development. Interactions 14, 4 (2007), 12–14.
[2] Zhe Liu, Chunyang Chen, Junjie Wang, Mengzhuo Chen, Boyu Wu, Xing Che, Dandan Wang, and Qing Wang. 2024. Make LLM a Testing Expert: Bringing Human-like Interaction to Mobile GUI Testing via Functionality-aware Decisions. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE '24). 1–13.
[3] Rajesh Vasa, Leonard Hoon, Kon Mouzakis, and Akihiro Noguchi. 2012. A preliminary analysis of mobile app user reviews. In Proceedings of the 24th Australian Computer-Human Interaction Conference (OzCHI '12). 241–244.