Last year, the sudden arrival of application-ready generative AI tools confronted us with challenging social and ethical questions. Visions of how this technology could profoundly change the way we work, learn, and live also accelerated conversations (and breathless media headlines) about how, and whether, these technologies can be used responsibly.
Responsible technology use, of course, is nothing new. The term covers a wide range of concerns, from the bias that may lurk in algorithms, to the data privacy rights of application users, to the environmental impact of new ways of working. Rebecca Parsons, CTO emerita at the technology consultancy Thoughtworks, gathers all of these concerns under the umbrella of "building an equitable tech future," in which, as new technologies are deployed, their benefits are equitably shared. "As technology becomes more important in significant ways in people's lives," she says, "we want to envision a future where that technology works for everyone."
Technology use often goes wrong, Parsons notes, "because we're too focused on either our own ideas of what good looks like, or on one particular audience as opposed to a broader audience." That may look like an app developer building only for an imagined customer who shares his own geography, education, and affluence, or a product team that fails to consider the damage a malicious actor could wreak in their ecosystem. "We think people are going to use my product the way I intend them to use it, to solve the problems I intend for them to solve," Parsons says. "But that's not what happens when things get out into the real world."
AI, of course, poses some distinct social and ethical challenges. Some of the technology’s unique challenges are inherent in the way that AI works: its statistical rather than deterministic nature, its identification and perpetuation of patterns from past data (thus reinforcing existing biases), and its lack of awareness about what it doesn’t know (resulting in hallucinations). And some of its challenges stem from what AI’s creators and users themselves don’t know: the unexamined bodies of data underlying AI models, the limited explainability of AI outputs, and the technology’s ability to deceive users into treating it as a reasoning human intelligence.
Parsons believes, however, that AI has not changed responsible tech so much as it has brought some of its problems into a new focus. Concepts of intellectual property, for example, date back hundreds of years, but the rise of large language models (LLMs) has posed new questions about what constitutes fair use when a machine can be trained to emulate a writer’s voice or an artist’s style. “It’s not responsible tech if you're violating somebody’s intellectual property, but thinking about that was a whole lot more straightforward before we had LLMs,” she says.
The principles developed over many decades of responsible technology work remain relevant during this transition. Transparency, privacy and security, thoughtful regulation, attention to societal and environmental impacts, and enabling wider participation via diversity and accessibility initiatives remain the keys to making technology work toward human good.
MIT Technology Review Insights’ 2023 report with Thoughtworks, “The state of responsible technology,” found that executives are taking these considerations seriously. Seventy-three percent of business leaders surveyed, for example, agreed that responsible technology use will come to be as important as business and financial considerations when making technology decisions.
This AI moment, however, may represent a unique opportunity to overcome barriers that have previously stalled responsible technology work. Lack of senior management awareness (cited by 52% of those surveyed as a top barrier to adopting responsible practices) is certainly less of a concern today: savvy executives are quickly becoming fluent in this new technology and are continually reminded of its potential consequences, failures, and societal harms.
The other top barriers cited were organizational resistance to change (46%) and internal competing priorities (46%). Organizations that have realigned themselves behind a clear AI strategy, and that understand its industry-altering potential, may be able to overcome this inertia and indecision as well. At this singular moment of disruption, when AI provides both the tools and the motivation to redesign many of the ways in which we work and live, we can fold responsible technology principles into that transition—if we choose to.
For her part, Parsons is deeply optimistic about humanity's capacity to harness AI for good, and to work around its limitations with common-sense guidelines, well-designed processes, and human guardrails. "As technologists, we get so focused on the problem we're trying to solve and how we're solving it," she says. "And all of responsible tech is really about lifting your head up and looking around to see who else is in the world with me."
To read more about Thoughtworks' analysis and recommendations on responsible technology, visit its Looking Glass 2024 report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.