AI and Law

New generations of AI systems are being deployed at a rapid pace. These models, particularly in generative AI, offer immense opportunities and are already changing how we work, communicate, and live. At the same time, the associated risks must be managed through appropriate regulation. The legal framework for AI should strike a balance between mitigating these risks and encouraging applications of AI that benefit society. Does the AI Act meet these requirements?

What Is the EU AI Act?

The AI Act is a regulatory framework that establishes EU-wide minimum requirements for AI systems, following a risk-based approach: the higher the risk, the stricter the obligations. Providers, importers, distributors, and operators of high-risk AI systems must each meet various requirements. While AI systems posing unacceptable risks are banned outright, and high-risk AI systems are subject to stringent technical and organizational requirements, lower-risk applications are only subject to certain transparency and information duties. There are also specific rules for general-purpose AI models, including generative models that produce content such as text and images.

How Are Risks Defined, and What Obligations Arise From These Classifications?

AI systems and applications that pose a clear threat to or disadvantage individuals fall into the category of unacceptable risk and are banned outright. The list of AI practices prohibited under Article 5 of the AI Act covers applications that violate European values by infringing on fundamental rights and thus present an unacceptable risk to affected individuals. These range from government social-scoring systems to voice-controlled toys that use cognitive-behavioral manipulation to encourage potentially dangerous behavior.

Applications are considered high-risk when they can pose significant dangers to health, safety, or the fundamental rights of individuals. Examples include autonomous vehicles, software for sorting CVs in recruitment processes, and creditworthiness checks that could deny individuals access to loans. Similar risks arise in assessing the reliability of evidence in law enforcement or in the automated processing of visa applications. Providers of such systems must implement, among other things, a quality management system that ensures compliance with the AI Act, as well as a risk management system that covers the entire lifecycle of a high-risk AI system. In addition, detailed technical documentation of the AI system must be created. The AI Act also sets specific technical requirements for high-risk AI systems, such as the logging of operations and events to ensure the traceability of the system's functioning. If data are used to train the model, the training, validation, and test datasets must meet the requirements of Article 10 of the AI Regulation.
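
To make the logging requirement more tangible, here is a minimal, purely illustrative Python sketch of structured event logging for an AI system. The concrete fields (model version, input reference, decision, confidence) and the `log_event` helper are assumptions chosen for illustration; the AI Act prescribes traceability, not this particular format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: high-risk AI systems must log events automatically so
# their operation can be traced; the record structure below is an assumption.
logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def log_event(model_version: str, input_ref: str, decision: str, confidence: float) -> None:
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # a reference to the input, not the raw personal data
        "decision": decision,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))

# Example usage for a hypothetical CV-screening system:
log_event("cv-screener-1.4.2", "application-2024-0317", "forward_to_review", 0.82)
```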

Limited risk refers primarily to a lack of transparency in the use of AI, for which the AI Act introduces specific transparency obligations. For example, users interacting with AI systems such as chatbots must be informed that they are communicating with a machine. Another example is the planned digital watermarking intended to prevent deception: operators of AI systems that generate or manipulate image, audio, or video content that could be considered a deepfake are required to disclose that the content has been artificially created or altered.
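
As a hedged illustration of these transparency duties, the sketch below prepends an explicit machine disclosure to a chatbot reply and attaches simple provenance metadata to generated content. The field names and the `GeneratedContentLabel` structure are assumptions for illustration and do not reflect any official labeling or watermarking standard.

```python
from dataclasses import dataclass, asdict

AI_DISCLOSURE = "Note: you are interacting with an AI system, not a human."

def wrap_chatbot_reply(reply_text: str) -> str:
    """Prepend the 'you are talking to a machine' notice to a chatbot answer."""
    return f"{AI_DISCLOSURE}\n\n{reply_text}"

@dataclass
class GeneratedContentLabel:
    """Hypothetical provenance metadata for AI-generated or AI-manipulated media."""
    content_id: str
    generator: str               # which model produced the content
    artificially_generated: bool
    manipulation_type: str       # e.g. "synthetic_image", "voice_clone", "none"

label = GeneratedContentLabel(
    content_id="img-000123",
    generator="example-image-model",
    artificially_generated=True,
    manipulation_type="synthetic_image",
)

print(wrap_chatbot_reply("Here is the product information you asked for."))
print(asdict(label))
```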

Controversies and Regulations for General Purpose AI Systems (GPAI)

The regulation of general-purpose AI models (GPAIs) presents one of the greatest challenges of the AI Act. These systems, flexible enough to be used in a wide variety of applications, require careful consideration of their potential risks and impacts.

One of the most controversial issues in the negotiations over the AI Act was precisely this regulation of general-purpose AI models. GPAIs are advanced AI systems that have been trained on extensive datasets and are capable of performing diverse tasks. Their defining feature is the ability to be integrated into many downstream systems or applications. Examples of such GPAIs include ChatGPT, Google's Bard, Meta's LLaMA, and other similar systems.

The regulations of the AI Act, particularly in Articles 52 and following, stipulate that providers of GPAIs must meet special requirements. This includes the provision of technical documentation and detailed information about the training data used. For GPAIs that present a high systemic risk, there are even stricter requirements. These regulations are intended to ensure that the technology is used safely and responsibly and that potential dangers are minimized.
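
To give a rough idea of what such documentation duties might look like in practice, here is an assumption-laden sketch of a provider-side record combining technical documentation with a summary of the training data. The structure, field names, and example values are illustrative only; they are not the template prescribed by the AI Act or the AI Office.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataSummary:
    # Illustrative fields only; the AI Act requires a summary of training content,
    # but it does not mandate this exact structure.
    sources: list[str]
    collection_period: str
    licensing_notes: str
    known_limitations: list[str] = field(default_factory=list)

@dataclass
class GPAIDocumentation:
    model_name: str
    provider: str
    parameters_billion: float
    intended_uses: list[str]
    evaluation_results: dict[str, float]
    training_data: TrainingDataSummary

# Hypothetical example record for an imaginary model:
doc = GPAIDocumentation(
    model_name="example-gpai-7b",
    provider="Example AI GmbH",
    parameters_billion=7.0,
    intended_uses=["text generation", "summarization"],
    evaluation_results={"toxicity_rate": 0.01, "hallucination_rate": 0.04},
    training_data=TrainingDataSummary(
        sources=["licensed news corpus", "public web crawl"],
        collection_period="2019-2023",
        licensing_notes="Mixed licenses; see internal data inventory.",
    ),
)
print(doc.model_name, doc.training_data.sources)
```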

Despite the general regulations, there are specific exemptions for AI systems offered under free or open-source licenses; these generally do not fall under the regulation. However, an important exception applies when such GPAIs are classified as high-risk systems, that is, when they are placed on the market, integrated into a high-risk system, or deemed to present systemic risk.

The regulation of GPAIs within the AI Act reflects the need to keep pace with the dynamic nature of AI technologies while maintaining public safety and ethical standards. This requires continuous adjustment of the legal framework to keep up with technological developments and provide effective protection.

When Will the AI Act Come Into Effect?

After more than two years of drafting and negotiating the AI Act, EU legislators have finally reached a consensus. On February 2, 2024, the Member States of the European Union unanimously approved the regulation in the Committee of Permanent Representatives. In the plenary session on March 13, 2024, the EU Parliament gave the green light for the law. The AI Act is expected to come into force in May/June 2024 and will be applicable 24 months later, giving companies until 2026 to adjust their measures to meet the new requirements. However, some provisions will apply sooner: the prohibitions of AI systems with an unacceptable risk will take effect after just six months.

For more information and key texts, visit the EU Artificial Intelligence Act website.

How Is Compliance Ensured?

According to the AI Act, each EU Member State must designate a national supervisory authority. This authority is tasked with monitoring the application and implementation of the AI Regulation. It ensures that AI systems within its jurisdiction comply with established standards and do not pose risks to citizens or consumers.

The national supervisory authorities are also represented in the European Artificial Intelligence Board. This board serves as a coordination platform and advisory body, connecting the various national authorities and coordinating their activities. It plays a crucial role in creating a uniform regulatory landscape for AI in Europe and acts as a central forum for the exchange of best practices and regulatory approaches.

In addition to the establishment of national authorities and the European Board, a special Office for Artificial Intelligence (AI Office) has been set up within the European Commission. This office is specifically responsible for monitoring compliance with regulations concerning general-purpose AI models. It checks the compliance of AI applications distributed in the EU and ensures that they meet high standards of safety and ethics.

Violations of the AI Act can result in substantial fines, which underscores the importance of stringent monitoring and enforcement of the regulations. The prescribed penalties are intended to serve as a deterrent and promote the correct implementation and use of AI technologies, ensuring responsible handling of this powerful technology.

What Impact Will the AI Act Have on Companies and Developers of AI Technologies?

The AI Act will undoubtedly have far-reaching effects on companies and developers of AI technologies. The effort required to adjust to the new regulations will depend on the specific use case.

Consider Frontnow as a case study: Frontnow develops AI-powered tools for online retailers by combining various AI systems, including generative AI. The aim is to create products that improve the customer experience of online stores and lead to higher conversion rates, better rankings, and increased revenue. Use cases like this, in which a company develops specialized AI tools for its clients, will become increasingly common. A distinctive feature of Frontnow is that it works exclusively with data and information provided by its customers: it does not collect personal data and relies only on the questions that serve as input, which reduces the risk of hallucinations, that is, the generation of false information by the AI system. Also noteworthy is how Frontnow evaluates the generative AI models it uses. Companies that deploy AI models need to remain flexible and able to respond to the obligations introduced by the AI Act. However, each use case is unique and must be assessed individually to determine its specific impacts.

A key requirement of the AI Act is risk management. Organizations must conduct a comprehensive risk assessment for their AI applications in order to analyze and evaluate potential impacts on society, the environment, and personal rights. In addition, there are specific requirements for data quality: the data used for training must be relevant, representative, error-free, and complete. Automated logging of operations and comprehensive technical documentation are also required, certain transparency and information obligations apply towards users, and human oversight of and intervention in the AI system must remain possible.
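
To show what the data-quality requirement can mean operationally, here is a minimal sketch of automated checks on a tabular training set (missing values, duplicate rows, class balance) using pandas. The checks are assumptions chosen for illustration; which checks are sufficient in a given case is a legal and technical judgment that a snippet like this cannot replace.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, label_column: str) -> dict:
    """Run simple, illustrative checks loosely inspired by the demand for
    relevant, representative, error-free, and complete training data."""
    return {
        "rows": len(df),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "label_distribution": df[label_column].value_counts(normalize=True).to_dict(),
    }

# Example with a tiny synthetic dataset (hypothetical credit-scoring features):
df = pd.DataFrame({
    "age": [25, 34, None, 41],
    "income": [32000, 54000, 47000, 47000],
    "credit_approved": [1, 1, 0, 0],
})
print(basic_quality_report(df, label_column="credit_approved"))
```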

To comply with the AI Act's requirements in time, responsible parties should consider the following steps:

  • Create a detailed list of all AI systems and use cases in your company.
  • Classify your AI systems and applications according to the AI Act's risk category.
  • Implement the requirements of the AI Act technically and organizationally in your processes based on your risk categorization.
  • Establish structures that permanently monitor and report on AI systems and relevant operations.

By doing so, they can ensure compliance with regulations and ethical standards, and successfully position their AI offerings in a rapidly evolving legal landscape.
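
One hedged way to start on the first two steps is a simple machine-readable inventory of AI use cases with a risk label per system. The categories below mirror the AI Act's risk tiers, but the example systems and the classification itself are assumptions; in practice the assignment requires legal review rather than a hard-coded label.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # high-risk use cases
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_category: RiskCategory
    owner: str                      # internal team responsible for the system

# Hypothetical inventory entries:
inventory = [
    AISystemRecord("cv-screener", "Rank incoming job applications", RiskCategory.HIGH, "HR Tech"),
    AISystemRecord("support-chatbot", "Answer customer questions", RiskCategory.LIMITED, "Customer Care"),
    AISystemRecord("spam-filter", "Filter inbound e-mail", RiskCategory.MINIMAL, "IT"),
]

for system in inventory:
    print(f"{system.name}: {system.risk_category.value} risk, owned by {system.owner}")
```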

What Does AI Regulation Look Like Outside the EU?

While the EU implements the AI Act, the motto on the other side of the Atlantic is "Innovate first, regulate second." Under this approach, innovation leaders like Microsoft, Google, and Meta can continue to operate freely. This could result in the EU being at a disadvantage in global competition.

In contrast to the EU's uniform regulatory approach, which includes additional governance structures, the United Kingdom has opted for a sectoral approach to regulating AI technologies. Existing regulatory authorities are expected to use their current powers to respond to the challenges and opportunities presented by AI. The government has defined five cross-sectoral principles—safety and robustness, appropriate transparency and traceability, fairness, accountability and governance, and contestability and redress—which regulatory bodies are to interpret and apply. Despite the flexibility of the current approach, the government indicates that statutory regulations may be required in the future for certain applications of AI.

In October 2023, the Biden administration issued the first guidelines for dealing with artificial intelligence (Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence). These establish safety standards for the handling of AI and build on a series of voluntary commitments. A new requirement is that developers of the most powerful AI systems must subject their models to safety tests and report the results to the authorities before releasing them. This applies in particular to models that could threaten national security, the economy, or public health. Additionally, federal agencies are to develop methods to label AI-generated content and counter the threat of deepfakes, i.e., manipulated video and audio recordings. The scope of the executive order is limited: it mainly contains recommendations and guidelines for federal agencies and does not mandate actions for private companies.

There is no doubt that the potential risks of AI have been recognized in the USA, UK, and EU. However, similar to the case with the General Data Protection Regulation, the EU is pursuing a stronger regulatory approach, while the USA, for now, is relying more on voluntary measures. Which approach will ultimately be more successful in ethically managing AI and simultaneously leveraging the opportunities of the technology in international competition remains to be seen.

What Does AI Regulation Look Like Outside the EU?

While the EU implements the AI Act, the motto on the other side of the Atlantic is "Innovate first, regulate second." Under this approach, innovation leaders like Microsoft, Google, and Meta can continue to operate freely. This could result in the EU being at a disadvantage in global competition.

In contrast to the EU's uniform regulatory approach, which includes additional governance structures, the United Kingdom has opted for a sectoral approach to regulating AI technologies. Existing regulatory authorities are expected to use their current powers to respond to the challenges and opportunities presented by AI. The government has defined five cross-sectoral principles—safety and robustness, appropriate transparency and traceability, fairness, accountability and governance, and contestability and redress—which regulatory bodies are to interpret and apply. Despite the flexibility of the current approach, the government indicates that statutory regulations may be required in the future for certain applications of AI.

In October last year, the Biden administration issued the first guidelines for dealing with artificial intelligence (Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence). These establish safety standards in the handling of AI and build on a series of voluntary commitments. A new requirement is that developers of the most powerful AI systems must subject their models to safety tests and report the results to the authorities before they are released. This is particularly true for models that could potentially threaten national security, the economy, or public health. Additionally, federal agencies are to develop methods to label AI-generated content and counter the threat of deepfakes—manipulated video and audio recordings. The scope of the decree is limited and mainly includes recommendations and guidelines for agencies, without mandating actions for private companies.

There is no doubt that the potential risks of AI have been recognized in the USA, UK, and EU. However, similar to the case with the General Data Protection Regulation, the EU is pursuing a stronger regulatory approach, while the USA, for now, is relying more on voluntary measures. Which approach will ultimately be more successful in ethically managing AI and simultaneously leveraging the opportunities of the technology in international competition remains to be seen.

Outlook on the EU AI Act: Curse or Blessing for Businesses?

With the AI Act, the EU is taking an important step toward regulating the use of AI responsibly. Through dynamic regulatory mechanisms, it attempts to keep pace with the rapid development of AI technologies. By adapting carefully to the new legal requirements, companies can help create an ethical and trustworthy AI landscape that serves both consumer needs and societal goals.

While the regulation aims to minimize foreseeable harms caused by AI, it also brings challenges and uncertainties. It is characterized by a multitude of undefined legal terms, which may create legal uncertainty and raise numerous questions of detail. The AI Act will undoubtedly increase the technical and staffing effort companies must invest to meet regulatory requirements, which may translate into higher costs and additional administrative burdens. It remains to be seen how effective the regulation will ultimately prove and which adjustments will be needed so that it gives appropriate weight to both consumer protection and the promotion of innovation.

About the Author

Eva Dröghoff is a legal trainee at the Brandenburg Higher Regional Court with a strong passion for the intersection of law and technology, which she is actively helping to shape as a Legal Trainee at Aleph-Alpha. Since May 2022, she has also been working with the Legal Tech Association Germany on the digitalization and innovation of the German legal market. After studying law in Heidelberg and Münster, she passed her First State Examination in August 2022 and went on to complete a Master of Laws (specialization: Law and Technology) at Tel Aviv University, where she focused intensively on Responsible AI.

Sources
