
A Dangerous Master

How to Keep Technology from Slipping Beyond Our Control

ebook
1 of 2 copies available
We live in an age of awesome technological potential. From nanotechnology to synthetic organisms, new technologies stand to revolutionize whole domains of human experience. But with awesome potential comes awesome risk: drones can deliver a bomb as readily as they can a new smartphone; makers and hackers can 3D-print guns as well as tools; and supercomputers can short-circuit Wall Street just as easily as they can manage your portfolio.
One thing these technologies can't do is answer the profound moral issues they raise. Who should be held accountable when they go wrong? What responsibility do we, as creators and users, have for the technologies we build? In A Dangerous Master, ethicist Wendell Wallach tackles such difficult questions with hard-earned authority, imploring both producers and consumers to face the moral ambiguities arising from our rapid technological growth. There is no doubt that scientific research and innovation are a source of promise and productivity, but, as Wallach argues, technological development is at risk of becoming a juggernaut beyond human control. Examining the players, institutions, and values lobbying against meaningful regulation of everything from autonomous robots to designer drugs, A Dangerous Master proposes solutions for regaining control of our technological destiny.
Wallach's nuanced study offers both stark warnings and hope, navigating both the fears and hype surrounding technological innovations. An engaging, masterful analysis of the elements we must manage in our quest to survive as a species, A Dangerous Master forces us to confront the practical — and moral — purposes of our creations.
Reviews

    • Publisher's Weekly

      April 6, 2015
      New technologies offer the lure of potentially improving human lives, but this thoughtful polemic convincingly argues that “In striving to answer the question ‘can we do this?’ too few ask ‘should we do this?’ ” Wallach (Moral Machines), of Yale University’s Interdisciplinary Center for Bioethics, admits that while no one immediately understood the dangers of such items as X-rays or asbestos, in other instances obvious dangers were brushed aside because proponents assumed that the benefits exceeded the risks. He emphasizes that every new technology passes through an “inflection point” where the general public, policy planners, and scholars can reflect on its impact before it enters the marketplace and develops a momentum of its own. Unsettling chapters describe what we should be—but mostly aren’t—discussing about transformative technologies such as genetic manipulation, radical life extension, killer robots, and computer-guided medical care. The obligatory how-to-fix-it conclusion urges legislators to ignore ideology, the lay public to use common sense, and engineers to design for responsibility as well as performance. Readers will admire this astute analysis while harboring the uneasy feeling that the barn door seems stuck open.

    • Kirkus

      April 15, 2015
Never mind the zombies and vampires. Worry about the cyborgs and nanobots: the real things, in other words. So how do we keep such creatures from killing us in our sleep? That's a question that is occupying the attention of not just sci-fi writers, but also ethicists such as Wallach (Interdisciplinary Center for Bioethics/Yale Univ.; co-author: Moral Machines: Teaching Robots Right from Wrong, 2008), who works the rich vein explored by Edward Tenner's and Donald Norman's looks at the Murphy's Law-ish world of unintended consequences wrought by human design. Tinkering with the deepest levels of subatomic particles may produce a big bang sufficient to end our existence; building ever smarter robots may produce one so smart that the robots decide that humans are pests. On that score, Wallach notes that though Isaac Asimov's laws of robots assert that robots may not hurt us, "in story after story, Asimov illustrates how difficult it would be to design robots that follow these simple ethical rules." Lest none of the current generation of robot designers even thinks about these things, Wallach looks at some of the overall ethical problems regarding complex systems, reviews hopeful developments in the field of resilience engineering, and generally advocates a more careful approach to building and thinking about things that may kill us, whether meant to do so or not. Figuring nicely in his discussion is London's "Wibbly Wobbly Bridge," which illustrates the point that "mechanical systems are naturally prone to move from orderly to chaotic behavior." Alas, human systems do as well, which occasions his call for better monitoring, modeling, and imagining the what-ifs. Wallach describes himself as a "friendly skeptic" with respect to some aspects of technology, but readers may incline to gloom after reading all the ways things technological can go south. A well-mounted argument that deserves wide consideration.

      COPYRIGHT(2015) Kirkus Reviews, ALL RIGHTS RESERVED.

    • Library Journal

      June 1, 2015

Wallach (bioethics, Yale Univ.; Moral Machines: Teaching Robots Right from Wrong) argues that technological development often outpaces moral and ethical considerations and issues this call for vigilance to counter techno-utopianism, the belief that a technical solution exists for every technological problem. He notes that inflection points may be present that provide opportunities to change the course or rate of a technology toward better societal outcomes, but barring a tragic accident or groundbreaking discovery, these may pass unnoticed until it is too late to effect a change in policy. Wallach provides many case studies of risk analysis in health, the environment, and military applications, highlighting potential dangers or unknown consequences. His proposed solutions include ethical engineering from the first developments of emerging technologies; coordinating committees from industry, government, and advocacy groups that develop regulating policy; and the creation of an informed and engaged citizenry. His cautious style is inherently conservative, but his acceptance of the trade-offs that must be made to reap the benefits of technology prevents him from coming off as a Luddite. VERDICT This appeal for deliberate and thoughtful approaches to humankind's future will find its audience among those interested in ethics, public policy, and the future of health care.--Wade M. Lee, Univ. of Toledo Libs.

      Copyright 2015 Library Journal, LLC Used with permission.

Formats

  • Kindle Book
  • OverDrive Read
  • EPUB ebook

Languages

  • English
