
Interview with Andrea Pertici, CIO Fashion & Luxury, and Enrico Fantaguzzi, Co-founder of the Digital Fashion Academy.
This article is based on a conversation that took place at Forum Retail 2025 in Milan, Italy.
How is AI transforming strategic processes in fashion and luxury?
AI is transforming strategic processes in the fashion and luxury sectors by forcing a shift from purely technological implementation to a holistic focus on organisational culture, risk management, and data governance. According to industry experts, the integration of AI requires the same level of strategic and organisational vision previously applied to innovations like blockchain or ERP systems, as technology alone cannot solve business problems and may even create new ones without a clear strategy.
The transformation of strategic processes is evident in the following key areas:
- Risk Assessment and Governance: With the introduction of regulations such as the European AI Act, companies must now evaluate all AI investments and implementations based on the level of risk they introduce to the organisation. Strategies must account for four levels of risk, ranging from “unacceptable” (which must be avoided) to “minimal” risk. Furthermore, leadership must manage “Shadow AI”—the phenomenon where generative AI interfaces allow any employee to introduce digital tools independently, bypassing standard vetting processes and creating potential security risks.
- Economic and Financial Planning: AI implementation is fundamentally altering cost structures; it is typically an operating cost (OpEx) that directly impacts a company’s profit margins (a worked example follows this list). Unlike previous IT evolutions, the introduction of AI does not necessarily reduce traditional IT costs, and recovering these expenses through efficiency gains, sales increases, or personnel reduction is currently difficult in the fashion sector. Consequently, rigorous business plans are essential, as current data suggests the success rate for AI applied to the value chain is only around 5%.
- Data Consolidation as a Foundation: A successful AI strategy relies heavily on an organisation’s capacity to consolidate and valorise its own data. The foundation of any AI initiative must be integrated processes and data; without this, companies risk “vendor lock-in,” where they lose control over data and competencies to external services, retaining control only over payment processing.
- Process Optimisation vs. Sub-optimisation: Strategic planning must ensure that applying AI to specific use cases, such as value chain transformation, does not sub-optimise other areas. Decision-makers must evaluate whether to develop, assemble, or buy services while ensuring that optimising one process does not simply shift costs to another part of the organisation.
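To make the margin mechanics concrete, here is a minimal worked example in Python; the revenue and cost figures are purely hypothetical assumptions, not data from the conversation.

```python
# Minimal illustration of how a recurring AI OpEx line erodes operating margin.
# All figures are hypothetical and purely illustrative (annual, EUR).

revenue = 100_000_000         # annual revenue
operating_costs = 85_000_000  # existing operating costs
ai_opex = 2_000_000           # new recurring AI subscription and usage fees

margin_before = (revenue - operating_costs) / revenue
margin_after = (revenue - operating_costs - ai_opex) / revenue

print(f"Operating margin before AI: {margin_before:.1%}")  # 15.0%
print(f"Operating margin after AI:  {margin_after:.1%}")   # 13.0%

# Break-even view: the extra gross profit needed just to offset the new OpEx.
print(f"Extra gross profit needed to break even: EUR {ai_opex:,}")
```

Even a modest AI line item moves the margin by two full points in this illustration, which is why the sources insist on rigorous business plans before committing.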
Ultimately, the successful transformation of strategic processes depends less on the technology itself and more on the culture and identity of the company, ensuring that AI adoption is driven by a clear understanding of business needs rather than spontaneous adoption.
What are the primary risks of adopting AI in organisations?
The primary risks of adopting AI in organisations can be categorised into regulatory, security, financial, and strategic challenges:
Regulatory and Compliance Risks
With the introduction of the AI Act in Europe, companies face strict compliance obligations based on four levels of risk, ranging from “minimal” to “unacceptable”. Organisations must carefully evaluate investments to ensure they avoid “unacceptable” risks, such as unauthorised facial recognition or social engineering, which are prohibited.
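As a sketch of what such a vetting gate could look like in code: the four tier names follow the AI Act, but the example use cases and the classification table are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers introduced by the European AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed only under strict obligations
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping only; a real assessment needs legal review,
# not a lookup table.
USE_CASE_TIERS = {
    "unauthorised_facial_recognition": AIActRiskTier.UNACCEPTABLE,
    "candidate_screening": AIActRiskTier.HIGH,
    "customer_service_chatbot": AIActRiskTier.LIMITED,
    "internal_spam_filter": AIActRiskTier.MINIMAL,
}

def vet_use_case(use_case: str) -> AIActRiskTier:
    """Gate an AI investment on its risk tier before any procurement step."""
    # Default conservatively to HIGH when a use case is unknown.
    tier = USE_CASE_TIERS.get(use_case, AIActRiskTier.HIGH)
    if tier is AIActRiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case!r} is in the prohibited tier and must be avoided")
    return tier
```

The conservative default matters: an unknown use case should trigger a full assessment, not silent adoption.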
Security and Governance Risks (“Shadow AI”)
A significant risk is the emergence of “Shadow AI”, a phenomenon where the ease of access to generative AI (conversational interfaces) allows any employee to introduce digital tools independently. This bypasses standard vetting processes, meaning individuals can subscribe to and use instruments that may introduce security vulnerabilities or data leaks without the organisation’s oversight.
Financial and Economic Risks
The financial implications of AI adoption pose serious risks to profitability:
• Impact on Margins: AI is typically an operating cost (OpEx), which directly affects profit margins.
• Additive Costs: Unlike previous technological shifts, AI does not necessarily reduce traditional IT costs; instead, it tends to be an additive expense. While revenues for AI vendors are projected to grow significantly, this implies increased costs for client companies.
• Low Success Rates: Returns on investment are not guaranteed. When applied to complex processes like the value chain (rather than simple personal productivity), the success rate is estimated to be as low as 5%.
• Difficulty in Cost Recovery: Recovering these costs through personnel reduction, efficiency gains, or sales increases is currently difficult within the fashion and luxury sectors.
Strategic and Operational Risks
• Vendor Lock-in: There is a major risk of entering a state of “lock-in”, where an organisation relies so heavily on external services that it loses control over its own data and internal competencies. In this scenario, the company effectively retains control only over paying the invoices, while the core intelligence resides externally.
• Process Sub-optimisation: Without a holistic view, there is a risk of optimising one specific process while inadvertently sub-optimising others, simply shifting costs or inefficiencies to a different part of the organisation rather than eliminating them.
• Lack of Strategic Vision: Adopting AI spontaneously without a clear organisational and cultural strategy can create more problems than it solves in the short term.
Why is the success rate for business AI implementation currently low?
The success rate for AI implementation in business processes, specifically the transformation of the value chain, is estimated to be as low as 5%. This low success rate is attributed to several structural, financial, and organisational factors:
• Complexity of Value Chain Integration: While “embedded” AI tools for personal productivity (e.g., in browsers or office suites) are easily adopted, applying AI to complex corporate processes and the value chain is far more difficult. This type of implementation requires deep integration rather than simple adoption.
• Additive Costs and ROI Challenges: Unlike previous technological shifts, introducing AI does not necessarily reduce traditional IT costs; instead, it often acts as an additive Operating Cost (OpEx) that directly impacts profit margins. Recovering these costs—whether through personnel reduction, efficiency gains, or increased sales—is currently proving difficult in the fashion and luxury sectors.
• Data and Process Maturity: Success depends heavily on an organisation’s existing culture and its ability to consolidate and valorise its own data. Without a solid foundation of integrated data and processes, companies cannot effectively deploy AI and risk “vendor lock-in,” where they rely entirely on external services and lose control over their internal competencies and assets.
• Risk of Sub-optimisation: There is a strategic risk that optimising one specific process via AI may inadvertently sub-optimise other areas, simply shifting costs or inefficiencies to different parts of the organisation rather than eliminating them.
• Hidden Implementation Costs: The cost of AI is never limited to the service fee alone; it involves a significant learning curve, as well as costs related to training, infrastructure, and networking (see the sketch below).
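A simple total-cost-of-ownership sum makes the point about hidden costs concrete; every figure below is a hypothetical placeholder, not sourced data.

```python
# The cost of an AI service is never the vendor fee alone.
# All figures are hypothetical placeholders (annual, EUR).
costs = {
    "service_fee": 500_000,     # vendor subscription / usage charges
    "training": 150_000,        # upskilling staff through the learning curve
    "infrastructure": 200_000,  # compute, storage, environments
    "networking": 50_000,       # connectivity and data transfer
    "integration": 250_000,     # wiring the tool into existing processes
}

total = sum(costs.values())
print(f"Total annual cost: EUR {total:,}")                      # 1,150,000
print(f"Vendor fee share: {costs['service_fee'] / total:.0%}")  # 43%
```

In this illustration the visible vendor fee is less than half of the true annual cost, which is exactly why business plans built on the service fee alone tend to fail.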
What does ‘Shadow AI’ mean for fashion industry security?
“Shadow AI” in the context of the fashion industry refers to the phenomenon where employees independently subscribe to and introduce AI tools into the company without official oversight or IT vetting.
According to Andrea Pertici, a CIO for global fashion brands, this presents specific security implications:
• Bypassing Safety Governance: The primary risk is that these tools bypass the rigorous risk assessments now required by regulations like the European AI Act. Companies are legally obliged to evaluate AI implementations against four levels of risk (from “minimal” to “unacceptable,” such as unauthorised facial recognition or social engineering), but Shadow AI prevents this evaluation from happening.
• Zero-Barrier Access: Unlike previous complex technologies, the “conversational interface” of generative AI has removed all technical barriers. Any employee who can type or speak can now introduce digital instruments that may contain security vulnerabilities, effectively creating a hidden layer of technology that the organisation cannot control or secure.
• Evolution of Shadow IT: It is described as the modern evolution of “Shadow IT”, where the ease of access to powerful tools increases the likelihood of data leaks or compliance violations because the central organisation is unaware the technology is even being used.
How can brands avoid ‘vendor lock-in’ while using AI?
To avoid “vendor lock-in,” brands must ensure they do not reach a state where data, services, and competencies reside entirely externally, leaving the company with control over nothing but the payment of invoices.
According to the sources, the primary strategy to prevent this involves establishing a strong internal foundation rather than simply purchasing solutions. This includes:
• Consolidating and Valorising Internal Data: The crucial starting point is the organisation’s capacity to consolidate its own data. A company must have integrated processes and data as a “foundation”. Without this internal backbone, the company becomes overly reliant on the vendor’s infrastructure to manage its core assets.
• Strategic Sourcing Decisions: Once that data foundation is established, the company is better positioned to make informed strategic decisions about whether to “develop, assemble, or buy” specific services on a case-by-case basis (a rough decision sketch follows this list). This allows the brand to integrate external AI where efficient without handing over total control of the value chain.
• Retaining Internal Competencies: The risk of lock-in is explicitly linked to the externalisation of competencies. Therefore, brands must ensure that while they may use external tools, the strategic understanding and management of these processes remain within the company to avoid dependency.
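One rough way to frame the develop/assemble/buy choice in code; the criteria and their ordering are illustrative assumptions, not the speakers’ framework, and a real decision still needs a case-by-case business plan.

```python
def sourcing_decision(core_differentiator: bool,
                      internal_competence: bool) -> str:
    """Rough develop/assemble/buy heuristic; criteria are illustrative."""
    if core_differentiator and internal_competence:
        return "develop"   # keep the strategic intelligence in-house
    if core_differentiator:
        return "assemble"  # combine external components, but keep data
                           # and orchestration under internal control
    return "buy"           # commodity capability; lock-in exposure is low

# Example: a capability that differentiates the brand but exceeds current
# internal skills would be assembled, not bought outright.
print(sourcing_decision(core_differentiator=True, internal_competence=False))  # assemble
```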
How can fashion brands prevent employees from using Shadow AI?
Preventing “Shadow AI” is not presented as a simple technical block, but rather as a challenge requiring a shift in organisational strategy, culture, and governance.
Fashion brands can address and mitigate the use of Shadow AI through the following strategic measures:
• Implementing Formal Risk Assessment (AI Act): Companies must replace spontaneous adoption with a formal evaluation process. Under regulations like the AI Act, organisations are obligated to evaluate all AI implementations based on four levels of risk, ranging from “minimal” to “unacceptable” (such as unauthorised facial recognition or social engineering). By enforcing this vetting process, brands ensure that no digital tool enters the company ecosystem without passing through this governance filter (a minimal sketch follows this list).
• Shifting from Spontaneous to Strategic Adoption: The primary driver of Shadow AI is “spontaneous adoption” by employees. To prevent this, leadership must impose a “strategic, organisational, and cultural vision” similar to past implementations of PLM or ERP systems. The organisation must actively define how AI solves business problems so that employees do not feel the need to independently seek outside tools.
• Centralising Data and Competencies: A key preventive measure is ensuring the organisation creates a strong internal foundation. By focusing on the “culture and capacity… to consolidate and valorise its own data”, the company provides a legitimate, integrated infrastructure. If the company provides integrated processes and data, it reduces the incentive for employees to use external “Shadow” tools that might lead to “vendor lock-in” or data loss.
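As a minimal sketch of such a governance filter: an approved-tools registry that every new AI service must pass through before use. The registry contents, tool names, and function names are hypothetical.

```python
# Minimal sketch of a governance filter for AI tools.
# Registry contents and names are hypothetical.
APPROVED_AI_TOOLS = {
    "corporate_copilot": {"risk_tier": "limited", "data_residency": "EU"},
    "vetted_translation_api": {"risk_tier": "minimal", "data_residency": "EU"},
}

def request_ai_tool(tool_name: str, requester: str) -> bool:
    """Return True only for tools that passed the formal risk assessment;
    everything else is routed to the vetting process instead of silent use."""
    if tool_name in APPROVED_AI_TOOLS:
        tier = APPROVED_AI_TOOLS[tool_name]["risk_tier"]
        print(f"{requester}: {tool_name} approved (tier: {tier})")
        return True
    print(f"{requester}: {tool_name} not vetted; opening a risk assessment request")
    return False

request_ai_tool("corporate_copilot", "merchandising")  # approved
request_ai_tool("random_genai_webapp", "design_team")  # routed to vetting
```

The point of the sketch is the routing, not the blocking: an unvetted request becomes a visible assessment ticket rather than an invisible subscription.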
Why prevention is difficult: The sources note that preventing Shadow AI is particularly challenging because generative AI (specifically the conversational interface) has “completely broken down any barrier to usage”. Unlike complex legacy systems, anyone who can type or speak can now independently subscribe to and introduce these tools, bypassing traditional IT hurdles. Therefore, the solution is less about technical barriers and more about corporate culture and adherence to the new regulatory frameworks.