WHAT IS RESPONSIBLE AI?
Responsible AI you can trust.
AI creates limitless possibilities for innovation – and with that comes the responsibility to develop and use this technology in a safe, ethical way. See what we’re doing to build trust and transparency for a better future.
OUR APPROACH
The responsible AI pillars we hold ourselves to.
KEY PRACTICES
Developing responsibly, at every step.
Our principles give us a solid foundation for our approach. But we don’t stop there – we put them into practice at every step of development. Learn more about some of our key practices.
Advocating for thoughtful frameworks.
We’re active in the development of leading frameworks and regulations such as the US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and the European Union’s AI Act.
Designing for responsible AI.
We consider the potential for unintended consequences throughout the design and build of our products, keeping safety and security in mind. That means guardrails to ensure fairness, transparency, explainability, reliability and more.
Providing our customers with visibility.
To help customers enable responsible AI within their own organisations, we explain how our AI solutions are built, how they work, and how they are trained and tested. Fact sheets, including descriptions of relevant risk evaluations and mitigations, are available for all customers.
“Transparency around how AI and ML models are trained is key to establishing trust. Systems that lack the sophistication to support that will struggle. Workday has the resources and brainpower to push all of us further ahead.”
– SVP, Chief Information Officer
WHAT’S AHEAD
Shaping a fair, transparent future for AI.
As the technology landscape evolves, so does our work in advancing the responsible use of AI. We look forward to uncovering even more innovative use cases while ensuring fairness and transparency for all.
Expanding our advisory board.
We’re bringing in more perspectives from different disciplines and areas of expertise.
Increasing investment.
We continue to invest in responsible AI training and to explore opportunities for collaboration with customers, partners and legislators.
Partnering with our customers.
We are continuously working with customers to find more opportunities to enable their responsible deployment of AI.
ARTIFICIAL INTELLIGENCE REPORT
Closing the AI trust gap.
Leaders and employees agree that AI presents many business opportunities, but a lack of trust that it will be deployed responsibly creates a barrier. The solution? Collaboration and regulation.
62% of leaders welcome AI adoption in their organisation.
23% of employees think their employer might put its own interests first when adopting AI.
70% of business leaders believe AI should allow for human intervention.
4 in 5 workers say their company does not communicate AI usage guidelines.