AI and Ethics

Exploring Responsible AI

Our Experts

Isabell Neubert

Subject Matter Expert, AI Ethics, Detecon

Dive into the world of AI with Isabell Neubert, an AI Ethics Expert at Detecon. Explore the potential of AI, the importance of responsible AI, and the role of the EU AI Act. Discover how to navigate the complex landscape of AI for a safer, more ethical future.

AI and beyond

Ethical Implications Explored at Davos

Artificial intelligence, real impact, serious need for rules

AI and its ethical implications took center stage at the World Economic Forum in Davos, where business leaders discussed this transformative technology. Privacy, bias, transparency, and collaboration emerged as key concerns.

Participants emphasized the need for stringent privacy regulations to protect personal data while striking a balance with data-driven advancements. The issue of bias in AI systems was addressed, stressing the importance of diverse and inclusive data sets to avoid discrimination.

Transparent and explainable algorithms were deemed essential for fostering trust and accountability. Collaboration between governments, businesses, and academia was seen as critical for establishing international standards and regulations to ensure responsible AI development and deployment. These discussions emphasized the need for ethical considerations to guide AI’s impact on society and individuals.

Digital Ethics Framework

With the AI Act, Artificial Intelligence is coming under even greater scrutiny with regard to data protection and the potential misuse of personal rights through this technology. But who actually protects employees in companies from unintentionally causing harm through the use of AI? Although guidelines provide the framework for applying ethics to AI, it remains challenging to translate them into procedural instructions within the company.

We guide companies in developing a tailored approach that combines the latest technologies with ethical standards. Our framework encompasses all stakeholders in the ethical AI journey, safeguarding reputation and limiting liability while bolstering business engagement in an age where corporate values significantly shape customer trust.

Central to our philosophy are collaboration and transparency, guiding businesses towards AI solutions that are not only innovative but also responsible and trustworthy.

Our structured framework is categorized into three core areas:

  • A strong commitment to human rights, ensuring non-manipulation, and assessing the impact of AI on structures, the environment, and systems.

  • A focus on liability prevention and guaranteeing non-discriminatory practices, aligning with legal standards.

  • An approach that ensures AI solutions are robust, accurate, and fortified with comprehensive cybersecurity measures.
Interview: Isabell Neubert

A Deep Dive into AI Ethics at Detecon

Isabell Neubert, an expert in Artificial Intelligence at Detecon, shares her insights on the potential and the ethical implications of AI.

Not long ago, an image circulated of Ukrainian President Zelensky waving the white flag of surrender. What has this got to do with your work?

"Ukraine never surrendered, and this was one of the first viral hits of an artificial intelligence creating images that were fake, but if trusted, could do a lot of harm. It was the same with Pope Francis when he went viral in his luxurious white winter jacket."

Is artificial intelligence prone to be misused?

"I prefer to look at it from a completely different angle: AI is capable of producing output of such superb quality that the world is entering a stage of almost unlimited potential right now. Just imagine what great ideas AI could generate in the fields of climate protection or medical treatment, even disease prevention. However, AI is such a powerful technology that we need strong regulation and consensus on how to use it to benefit."

You refer to Responsible AI. Why is it so important for economies and societies to pay attention to this discipline?

"Everybody is talking about AI these days, and companies fear that if you don’t jump onboard you will become irrelevant very soon. And it’s true! Since the launch of ChatGPT, everybody had the opportunity to see for themselves just how much potential AI unveils. Decision makers don’t think any differently. However, if things go wrong, the potential to do harm is equally huge, and that’s something that needs to be adressed."

Will there be any certificate given out to companies so that users can trust a specific technology?

"Not yet, but quite frankly: The EU AI Act is just being progressed into law. Regulation is key to success when people and companies wish to benefit from AI. Violations will be prosecuted, and that’s good news for AI. This is not only true for the EU, but many countries across the world are currently working on developing such rules. For example, US President Joe Biden issued an executive order on the safety, security and trustworthiness of AI in autumn 2023."

What can I do myself to keep control?

"At first, everyone should know AI basics and be informed of decisions being made about them, where AI is used. As second, as strange as it may sound, but recently AI is being increasingly used to fight the downsides of AI. Take the Nightingale app: It will put a watermark on any images that you upload to social media so that they cannot be used to train an AI that attempts to download images from web sources."

How do you sleep at night given the existence of such technologies with unmatched power?

"Really good, seriously! For one simple reason: I know what’s possible with AI, so I can ask the right questions and improve it where it fails at the moment, and I’m looking forward to a new chapter in which we can find solutions to some of mankind’s biggest problems."

How does the EU AI Act come into play here?

"It’s the most relevant legislation in the world today that describes the rules to follow if you wish to leverage AI technology. For example, you are not entitled to launch any tools based on AI if you didn´t check on the risk level first and present a risk mitigation plan according the this level. This is critical since the people behind, in a company, have different levels of responsibility: Data scientists develop technologies, but are not liable – and often not included into the strategic planning of the use case. Product owners are responsible for the quality outcome, but often cannot fully foresee all the risks and model implications. And users don’t know if an AI has been used for a specific decision, how a specific AI works or what data are being used, so they need to trust. All in all, the EU AI Act aims to provide transparency on the whole value chain and thus reduce risks that AI technologies are being misused."

Related Content

What you need to know about the first-ever comprehensive legal framework on Artificial Intelligence worldwide.

  • Regulatory Standards

    The EU AI Act leads in setting strong standards for the ethical development and deployment of AI. Targeting high-risk applications, it ensures safety and accountability.

  • Human Oversight and Ethics

    The act emphasizes human oversight and ethical considerations in AI, balancing technological advancements with transparency and fairness.

  • Global Benchmark

    The EU AI Act serves as a global benchmark for AI regulation, driving AI governance discussions and setting precedents worldwide.


Get in touch

with our Expert, write us your request

Leave your email and a message directly to Isabell Neubert: