As we develop AI to become increasingly efficient at processing and analyzing data, we must think ahead about the areas of society most at risk of being dramatically altered. A robust AI policy plan will need to outline major areas of concern while still allowing expansion or refinement in the future as the path of artificial intelligence becomes clearer. Such a policy can only be built over time, with great effort and the collaboration of many experts and stakeholders. And the longer we delay those discussions, the greater the risk we place upon ourselves.
We Need to Act Now
Elon Musk, founder of Tesla Motors and Space Exploration Technologies Corporation (SpaceX), addressed the possible societal impacts of artificial intelligence last year. "There is a pretty good chance we end up with a universal basic income, or something like that, due to automation," Musk told CNBC, later adding, "there has to be some improved symbiosis with digital super intelligence."[6] Whether that's a cautious warning or a call for optimism about our robotic future depends on your perspective: a universal basic income would involve enormous changes to world economies, but then again, artificially intelligent systems may do the very same. Either way, it's a clear indication that Musk sees AI playing a very large role in our future society.
There's still a very large debate going on in the artificial intelligence field between people who believe "true" AI is ultimately a mythical concept that we'll never produce and people who believe it is both very possible and very likely catastrophic. The former group claims that fears of a menacing, destructive AI program have no basis in reality: that machine learning is fundamentally different from true intelligence, which means that computer programs are incapable of sentience. The latter group believes that our human brains may be unable even to imagine the kind of intelligence we may be creating, and so we must be very careful about the kinds of programs we develop and the kinds of moral, human values we put into them. As Paul Ford explained in his profile of artificial intelligence in the MIT Technology Review in 2015: "We're basically telling a god how we'd like to be treated."[7]
There's no way to be certain which scenario we will ultimately face. However, that doesn't mean we have to wait around before we start adapting. Indeed, by most accounts, we cannot possibly afford to wait.
ADAM BENJAMIN is a freelance writer living in Seattle.
Endnotes
[1] "Mirror Test"; Science Daily. Accessed Jan. 30, 2017, at https://www.sciencedaily.com/terms/mirror_test.htm.
[2] "UK's Skynet military satellite launched"; BBC News; Dec. 29, 2012.
[3] "Should we be afraid of AI?"; Aeon; May 9, 2016.
[4] "Why Deep Learning Is Suddenly Changing Your Life"; Fortune; Sept. 28, 2016.
[5] "Skype Translator." Accessed Jan. 20, 2017, at https://www.skype.com/en/features/skype-translator/.
[6] "Musk: We need universal basic income because robots will take all the jobs"; Ars Technica UK; Nov. 7, 2016.
[7] "Our Fear of Artificial Intelligence"; MIT Technology Review; Feb. 11, 2015.