
SECOND READING SEMINAR 2015

ARTIFICIAL INTELLIGENCE AND MANAGEMENT


Coordinator: Nelson Lara
Centro de Investigaciones Postdoctorales (CIPOST)
Postgrado en Ciencias Administrativas (PCA)
Universidad Central de Venezuela (UCV)
Location: Ground floor, Rodolfo Quintero building
Time: 7:00 A.M. to 9:00 A.M.

Following the positive impact of the first Reading Seminar 2015, in which one of its
articles served as a reference for introducing changes to our graduate programs,
changes that will surely place us at the state of the art, we now continue adding
brushstrokes to the focus of the study of management. As Simon put it, to manage is
to decide, which is why in recent years we have concentrated on working through
Decision Theory from its different approaches. Now, in the second Reading Seminar
2015, we return to the topic of Artificial Intelligence, with management as the
backdrop.
We speak of returning because AI is a topic that boomed in the 1980s, then declined,
and today, with technological advances, especially in neuroscience, has regained
significant academic vigor. Hence, my dear friends, as a preliminary task before the
Seminar begins next Monday, September 28, we must bring ourselves up to date by
reading the article published in 1993, "A Research Perspective: Artificial Intelligence,
Management and Organizations," by Peter Duchessi (University at Albany, Albany, NY,
USA), Robert O'Keefe (Rensselaer Polytechnic Institute, Troy, NY, USA), and Daniel
O'Leary (University of Southern California, Los Angeles, CA, USA), which is attached.
As has been our custom for many years, the Reading Seminar runs strictly on time
from 7:00 A.M. to 9:00 A.M. and is aimed at born researchers: professors, students,
and executives. The articles are in English, which is a plus for one's personal
academic development. The Seminar will take place over nine sessions, for each of
which every participant must previously read, study, reflect on, and complement the
corresponding material, which will then be debated and analyzed by each and every
participant, according to the following program:

PROGRAM

1. 28-09-2015
Artificial Intelligence and Consciousness (2007)
Drew McDermott
Yale University
Abstract: Consciousness is only marginally relevant to artificial intelligence
(AI), because to most researchers in the field other problems seem more
pressing. However, there have been proposals for how consciousness would
be accounted for in a complete computational theory of the mind, from
theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis,
Sloman, and Smith. One can extract from these speculations a sketch of a
theoretical synthesis, according to which consciousness is the property a
system has by virtue of modeling itself as having sensations and making free
decisions. Critics such as Harnad and Searle have not succeeded in
demolishing a priori this or any other computational theory, but no such theory
can be verified or refuted until and unless AI is successful in finding
computational solutions of difficult problems such as vision, language, and
locomotion.

2. 05-10-2015

Artificial Intelligence and Human Thinking (2012)

Robert Kowalski
Imperial College London, United Kingdom, rak@doc.ic.ac.uk

Abstract
Research in AI has built upon the tools and techniques of many different disciplines,
including formal logic, probability theory, decision theory, management science,
linguistics and philosophy. However, the application of these disciplines in AI has
necessitated the development of many enhancements and extensions. Among the
most powerful of these are the methods of computational logic. I will argue that
computational logic, embedded in an agent cycle, combines and improves upon both
traditional logic and classical decision theory. I will also argue that many of its
methods can be used, not only in AI, but also in ordinary life, to help people improve
their own human intelligence without the assistance of computers.

3. 12-10-2015

Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008)


Eliezer Yudkowsky
Machine Intelligence Research Institute
Yudkowsky, Eliezer. 2008. "Artificial Intelligence as a Positive and Negative Factor in
Global Risk." In Global Catastrophic Risks, edited by Nick Bostrom and Milan M.
Ćirković, 308–345. New York: Oxford University Press.

4. 19-10-2015

Human Intelligence Needs Artificial Intelligence (2012)


Daniel S. Weld, Mausam, and Peng Dai
Dept. of Computer Science and Engineering, University of Washington, Seattle, WA 98195 {weld,mausam,daipeng}@cs.washington.edu

Abstract
Crowdsourcing platforms, such as Amazon Mechanical Turk, have enabled the
construction of scalable applications for tasks ranging from product categorization and
photo tagging to audio transcription and translation. These vertical applications are
typically realized with complex, self-managing workflows that guarantee quality results.
But constructing such workflows is challenging, with a huge number of alternative
decisions for the designer to consider. We argue the thesis that artificial intelligence
methods can greatly simplify the process of creating and managing complex
crowdsourced workflows. We present the design of CLOWDER, which uses machine
learning to continually refine models of worker performance and task difficulty. Using
these models, CLOWDER uses decision-theoretic optimization to 1) choose between
alternative workflows, 2) optimize parameters for a workflow, 3) create personalized
interfaces for individual workers, and 4) dynamically control the workflow. Preliminary
experience suggests that these optimized workflows are significantly more economical
(and return higher quality output) than those generated by humans.

5. 26-10-2015

Aligning Superintelligence with Human Interests: A Technical Research Agenda (2015)

Nate Soares and Benja Fallenstein
Machine Intelligence Research Institute {nate,benja}@intelligence.org

6. 02-11-2015
The Value Learning Problem (2015)
Nate Soares
Machine Intelligence Research Institute nate@intelligence.org
Abstract
A superintelligent machine would not automatically act as intended: it will act as
programmed, but the fit between human intentions and formal specification could
be poor. We discuss methods by which a system could be constructed to learn
what to value. We highlight open problems specific to inductive value learning
(from labeled training data), and raise a number of questions about the
construction of systems which model the preferences of their operators and act
accordingly.

7. 09-11-2015
Research priorities for robust and beneficial artificial intelligence (2015)
The initial version of this document was drafted by Stuart Russell, Daniel Dewey
& Max Tegmark, with major input from Janos Kramar & Richard Mallah, and
reflects valuable feedback from Anthony Aguirre, Erik Brynjolfsson, Ryan Calo,
Tom Dietterich, Dileep George, Bill Hibbard, Demis Hassabis, Eric Horvitz, Leslie
Pack Kaelbling, James Manyika, Luke Muehlhauser, Michael Osborne, David
Parkes, Heather Roff, Francesca Rossi, Bart Selman, Murray Shanahan, and
many others.
Executive Summary:
Success in the quest for artificial intelligence has the potential to bring
unprecedented benefits to humanity, and it is therefore worthwhile to research
how to maximize these benefits while avoiding potential pitfalls. This document
gives numerous examples (which should by no means be construed as an
exhaustive list) of such worthwhile research aimed at ensuring that AI remains
robust and beneficial.
8. 23-11-2015
Corrigibility (2015)
Nate Soares and Benja Fallenstein and Eliezer Yudkowsky
Machine Intelligence Research Institute {nate,benja,eliezer}@intelligence.org
Stuart Armstrong Future of Humanity Institute University of Oxford
stuart.armstrong@philosophy.ox.ac.uk
Abstract
As artificially intelligent systems grow in intelligence and capability, some of their
available options may allow them to resist intervention by their programmers. We
call an AI system corrigible if it cooperates with what its creators regard as a
corrective intervention, despite default incentives for rational agents to resist
attempts to shut them down or modify their preferences. We introduce the notion
of corrigibility and analyze utility functions that attempt to make an agent shut
down safely if a shutdown button is pressed, while avoiding incentives to prevent
the button from being pressed or cause the button to be pressed, and while
ensuring propagation of the shutdown behavior as it creates new subsystems or
self-modifies. While some proposals are interesting, none have yet been
demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem
in corrigibility wide-open.
9. 30-11-2015
Future Progress in Artificial Intelligence: A Survey of Expert Opinion (2014)
Vincent C. Müller & Nick Bostrom
a) Future of Humanity Institute, Department of Philosophy & Oxford Martin School,
University of Oxford; b) Anatolia College/ACT, Thessaloniki
Müller, Vincent C. and Bostrom, Nick (forthcoming 2014), "Future progress in
artificial intelligence: A Survey of Expert Opinion," in Vincent C. Müller (ed.),
Fundamental Issues of Artificial Intelligence (Synthese Library; Berlin: Springer).
Abstract: There is, in some quarters, concern about high-level machine
intelligence and superintelligent AI coming up in a few decades, bringing with it
significant risks for humanity. In other quarters, these issues are ignored or
considered science fiction. We wanted to clarify what the distribution of opinions
actually is, what probability the best experts currently assign to high-level
machine intelligence coming up within a particular timeframe, which risks they
see with that development, and how fast they see these developing. We thus
designed a brief questionnaire and distributed it to four groups of experts in
2012/2013. The median estimate of respondents was for a one in two chance
that high-level machine intelligence will be developed around 2040-2050, rising to
a nine in ten chance by 2075. Experts expect that systems will move on to
superintelligence in less than 30 years thereafter. They estimate the chance is
about one in three that this development turns out to be "bad" or "extremely bad"
for humanity.
The Reading Seminar is held within the framework of the Centro de Investigaciones
Postdoctorales (CIPOST-UCV) and the Postgrado en Ciencias Administrativas (PCA) of
FaCES-UCV.
A Certificate will be awarded to all participants, and each participant will additionally
have the opportunity to write an article which, after passing peer review, will become
part of a publication.

Sincerely,

Professor Nelson Lara
