A man and a woman working on a laptop and the text: Introducing Canva Shield

Introducing Canva Shield: Safe, fair and secure AI

The new era of artificial intelligence is supercharging our mission to empower the world to design – but above all else, AI should be safe, fair and inclusive. That’s why we’ve developed Canva Shield, an industry-leading approach to ensuring our products are safe, fair and secure.


As a global platform used by more than 150 million people, we’re committed to building the world's most trusted platform – including through the ways we use artificial intelligence. We believe people shouldn’t need complex software or expert design knowledge to unlock their creativity, and our new suite of AI tools as part of Magic Studio takes our mission of empowering the world to design even further.

A montage of Canva Magic Studio AI tools

As well as today’s launch of Magic Studio, we’re excited to unveil an important step in our trust and safety journey with the introduction of Canva Shield: our new collection of robust trust, safety, and privacy tools designed to keep you in control.

Whether you’re using Canva as an individual creator or enterprise organization, Canva Shield means you can design with the peace of mind that you’re in safe hands. Let’s take a look.

A team image of three females and two males sitting around a table with a laptop using Canva

Introducing enterprise indemnification

For additional peace of mind when designing with our Magic Studio products, we’re providing indemnification to eligible enterprise customers at no additional cost. This means you can design with confidence, knowing we’ve got your back in the rare event of an intellectual property claim stemming from content you’ve created with our Magic Studio products.

Robust trust and safety tools

One of our guiding values is ‘be a force for good’ – which means keeping our community safe is paramount. Our dedicated Trust and Safety team is laser-focused on maintaining a safe and secure environment for everyone creating content with Canva. We invest heavily in this commitment through a number of initiatives, including:

  • All our generative AI products including Canva Assistant, AI image and video generator Magic Media, copywriting assistant Magic Write, and photo editor Magic Edit go through rigorous safety reviews as part of our product development process.
  • We automatically moderate certain input prompts to identify and prevent inputs that might generate inappropriate or unwanted content. This includes medical, political, hateful or explicit topics.
  • We also automatically moderate generated outputs with machine learning technology to scan for rare cases of inappropriate, unsafe, or harmful content before the result is made visible.
  • We’ve created a feedback loop that gives our community the opportunity to report any rare issues, such as edge cases that result in anything unexpected being generated.
The Canva Community can provide feedback on our AI tools including Magic Media via a report function

A steadfast commitment to Creators

Creators have been the cornerstone of Canva for the last decade and continue to play a critical role in our mission to empower the world to design.

We're committed to not training our proprietary AI models on Canva Creator content without their explicit permission. We're putting proactive consent in place, which means Canva Creators can opt out of having their designs used to train our AI models at any time.

A collage of different diverse images from our Canva creator community.

Our creator community is at the heart of Canva.

We’re also committing $200 million in content and AI royalties to be paid to our creator community over the next three years. The Creator Compensation Program will pay Canva Creators who consent to have their content used to train our proprietary AI models.

Striving for diversity and fairness

We want to make our new AI-powered features truly inclusive and representative of the diversity of our community. To do this, we’re developing technology and processes to achieve more diverse outputs that serve our community in the best way possible. Feedback from our community is vital to this effort: the more feedback we receive, the more we can continue to refine our models.

As this technology continues to develop, we’re also committed to reducing potential harm from AI outputs through debiasing. To begin making inroads here, we’re developing a human-centric approach to improve fairness, starting with the images generated by our Text to Image product. While our work in this space will never be complete, we’re sharing our approach, early findings, and lessons with the community to help contribute to safe, equitable, and responsible technology at large. We’ll soon be open-sourcing the bias steering model we’ve developed in-house to help other companies build more inclusive representation in generative AI tools.

Advanced privacy controls

We believe you should have a say in how your data is collected and used. That’s why we’ve introduced new transparent privacy options to ensure you’re in control.

We won't train on your private content without permission. By default, all Canva users are opted out of having their private design content, such as the text in their designs, used to train our proprietary AI models. You can also opt out at any time of sharing general information with us, such as your search queries or the fonts you like using. This information helps us build a better, more personalized product experience, but sharing it is entirely optional and up to you.

You can update both of these preferences at any time via your Privacy Settings, and your access and ability to use our products won’t change either way.

Built for the workplace

Want some people on your team to access Magic Studio features, but not others? Our new team administrator controls make it easy to manage your team's access to the available AI-powered tools.

Team administrators have full control over how Magic Studio products are enabled and used across the workplace and can toggle these features based on employee roles at any time.

Designing a safe, fair and secure future

We’re incredibly excited about the launch of both Magic Studio and Canva Shield – we can’t wait to see all of the ways this new technology helps you to supercharge your creativity and achieve your goals.

As this technology continues to evolve, we look forward to continuing to open source and share our learnings to help contribute to a future of safe and responsible AI.

You’re in safe hands with Canva Shield. Find out more here.

