Building a shared reporting experience for the machine learning lifecycle
MY ROLE
I was the design lead, responsible for the end-to-end design process
TEAM
Manuel Bähr, Engineering Manager
Nils Braun, Software Engineer
Mats Pörschke, Software Engineer
Jake Konstantinos, Software Engineer
VERSION
v1.0

The AIML organization faced growing complexity in knowledge sharing, collaboration, and decision-making. Scattered tools and processes made it difficult to analyze ML workflows cohesively, resulting in duplicated efforts, delayed insights, and siloed knowledge.
The goal was to build a centralized, flexible, browser-based reporting solution that allows teams to make better decisions through improved data visibility and analysis capabilities.
At the outset of the project, there was no clear mission and no pre-existing insights to draw on, so I partnered with the DRI to explore how users incorporate reporting into their workflows.
I talked to machine learning engineers from machine translation, GenAI music, video engineering, and foundation models teams to understand their current challenges. Additionally, I conducted competitive analysis, watched product demos and tutorials, and learned how other products implemented reporting and analytics.
Key insights:
“My preference is always to leverage what is inside. It's much simpler in terms of billing and approval and all that kind of stuff and it makes our training ecosystem better. Which is super important to me, because my efficiency is always gonna be limited by the infrastructure that we have in house. The better the infrastructure is in house, the easier the job is for me, and the more I can concentrate on where really my value is.”
~ ML Engineering Manager
How might we easily create, share, and access interactive data visualizations and metrics so teams can effectively and efficiently analyze their machine learning workflows?
ML Reports is a comprehensive web-based ML reporting tool designed to streamline the creation and sharing of ML insights across the AIML organization.
The platform's dual-interface approach accommodates users with varying technical needs: an intuitive web-based editor for direct in-browser creation and modification, and a Python SDK for advanced customization. All reports are seamlessly integrated with ML Hub projects, ensuring team-wide access to insights. To promote collaboration and reduce duplicate effort, reports can be shared at multiple levels – from individual team members to the broader Apple ML community.
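The actual ML Reports SDK is internal and its interface is not documented in this case study. As a rough illustration of the kind of programmatic workflow the dual-interface approach enables, a report built in code might resemble the sketch below; the `Report` class and every method on it are hypothetical stand-ins, not the real API.

```python
# Hypothetical sketch only -- the real ML Reports SDK is not shown in this
# case study. Report, add_markdown, add_chart, and publish are illustrative
# stand-ins for what a chainable report-building interface could look like.

class Report:
    """Minimal stand-in for a report assembled programmatically."""

    def __init__(self, title, project):
        self.title = title
        self.project = project  # reports are tied to an ML Hub project
        self.blocks = []        # ordered report content
        self.published = False

    def add_markdown(self, text):
        # Free-form narrative blocks alongside the data.
        self.blocks.append(("markdown", text))
        return self

    def add_chart(self, metric, runs):
        # A visualization comparing a metric across training runs.
        self.blocks.append(("chart", {"metric": metric, "runs": runs}))
        return self

    def publish(self, share_with="team"):
        # Sharing levels range from a single team to the wider ML community.
        self.published = True
        return f"'{self.title}' shared with {share_with} ({len(self.blocks)} blocks)"


report = (
    Report("Weekly training summary", project="mt-experiments")
    .add_markdown("## Loss curves for this week's runs")
    .add_chart(metric="validation_loss", runs=["run-41", "run-42"])
)
print(report.publish(share_with="team"))
```

The chainable style mirrors the editor's block-by-block model: each call appends one piece of content, and publishing attaches the result to the report's project so teammates can find it.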
One of the biggest challenges I faced throughout this project was balancing moving forward with designs while navigating ambiguity. I was simultaneously creating mockups, doing desk research, and working with the team on the overall vision and strategy for ML Reports.
Developing product principles and feature prioritization helped drive the process forward.
With information from discovery work, I developed the following product principles:
Collaborative Intelligence
Enhance collaboration through effortless sharing of insights, visualizations, and analyses across teams.
Efficiency Through Standardization
Streamline repetitive analysis tasks, allowing teams to focus on interpretation and discovery.
Seamless Exploration
Enable frictionless movement between datasets, experiments, and evaluations.
Evidence-Based Decision Making
Provide structured data and analysis tools for confident decisions about model readiness.
Clarity Through Visibility
Surface comprehensive metrics across the entire ML lifecycle through intuitive visualizations.
With features identified for the MVP, I jumped into the design process. Thinking about how a user gets started creating a report was surprisingly challenging. For the MVP, the team established that users would view and create reports from the Reports page in ML Hub.
I further explored the toolbar placement and what the empty state would look like. Here, users are presented with their options for report content. While creating new patterns and components was acceptable, and even encouraged, since this was a new feature and experience for ML Hub, I felt the design could be better aligned with other Apple experiences.
The final version is more aligned with other Apple getting-started experiences. Users can take a product tour to familiarize themselves with the features and view documentation to help them get started. The toolbar is fixed at the top to allow for maximum screen real estate and feels familiar, like other document apps. Instead of an edit state, as in the previous versions, there is a context switcher: a user creating a report is always in the context of the “report builder” and can easily toggle to the preview state before publishing or sharing.
Collaborative Intelligence
All reports are seamlessly integrated with ML Hub projects. To directly address the organizational challenges of collaboration and duplication, reports can be shared at multiple levels – from individual team members to the broader Apple ML community.
Efficiency Through Standardization
Teams now have a standardized, yet flexible solution for exploring and analyzing their data.
Seamless Exploration
Centralized solutions enable teams to focus on their work rather than the infrastructure. With reporting integrated into the ML platform, analysis is seamless rather than another step or workflow.
Clarity Through Visibility
Establishing best practices and transparency in the process enables teams to make better decisions.
Now that the private beta launch was behind us and we had some breathing room, I took the opportunity to validate decisions and get feedback regarding the private beta release of ML Reports. I conducted six semi-moderated interviews with early adopters.
Key insights:
Unclear reporting capabilities hinder adoption
Pain-points:
Unclear reporting capabilities were causing underutilization. Because users were uncertain about the available options, teams might settle on suboptimal approaches, fail to leverage the full potential of reports, or abandon the feature altogether.
Explorations of the getting-started experience
Reporting needs to be more integrated into the platform
Pain-points:
While it made sense to start with a main reports page where users could create a report for the MVP, we needed to better integrate reporting into the ML platform. Meeting users where they are is essential to creating a product that is accessible and usable.
Reports should be accessible to both technical and non-technical users
Pain-points:
Cross-functional accessibility barriers limited report adoption. The MVP was not designed for non-technical stakeholders, which created knowledge silos that prevented effective decision-making across departments. Technical teams might create reports that business leaders couldn't interpret, while non-technical users avoided engaging with data that could inform strategic decisions.
Users want to have more control over their report layout
Pain-points:
When users added a new component, it was appended to the bottom of the report. If that was not the intended location, they had to move it manually, resulting in too much time spent on report organization.
“We've been using the MLHub Reports feature for a few weeks now to track our scheduled jobs and it has proven very useful. It was much easier to get up and running with this than the alternatives (custom web page, Tableau, etc.) Thanks!”
~ ML engineer
We met or exceeded all deadlines, and we onboarded two teams beyond our original goal of three.
© 2025 Ricki Jaeckel