Speaker: 常象宇
Abstract:
<p>As Large Language Models (LLMs) become integral to business operations, strategic decision-making, and customer-facing services, two pressing management-level questions emerge: What is the true economic value of the data fueling these models? And how resilient are these models to adversarial risks that could undermine trust and compliance? This presentation introduces two recent advancements that jointly address these concerns. The first is an asymmetric data valuation framework that enables structure-aware assessment of data utility, particularly in scenarios involving augmented data sources (e.g., synthetic data produced by a generative model) and dependent data sources (e.g., augmented data derived from an existing dataset). It offers actionable insights for data monetization, cost-effective model training, and fair contribution evaluation in data markets. The second highlights the rising vulnerability of aligned LLMs to adversarial prompt attacks, revealing how models can be systematically “jailbroken” to produce harmful outputs despite built-in safeguards. A two-stage attack framework achieves near-total success rates, carrying serious implications for risk governance and AI compliance frameworks. Together, these perspectives provide business leaders, data strategists, and AI risk managers with a sharper lens for assessing value creation and risk exposure when deploying LLM-based systems.</p>
<p class="MsoNormal" style="line-height:20.0pt;mso-line-height-rule:exactly"><b><span lang="EN-US" style="font-size:14.0pt;mso-bidi-font-size:11.0pt;font-family:"Times New Roman",serif;mso-fareast-font-family: 楷体"><o:p></o:p></span></b></p>
<p> </p> |