2020-01-10 | Subject | What is the goal?
The issue is not AI itself. The issue is what the goal is. I now know what the basis for AI is in most cases. Consider schema.org, DBpedia, and OBO Foundry. These are ontologies: collaborative efforts to establish meaning in a way that computer agents can navigate. In my own efforts, I am using this tech to model systems more efficiently. While I am using a minimum of collaborative knowledge for simplicity, I am aware of the open and free ontological ecosystems, build my tools to be extensible to them, and take advantage of them. There is no need to create stuff that already exists.
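As a toy illustration of what "navigate" means here, an ontology can be held as subject-predicate-object triples that an agent walks mechanically. This is a minimal sketch; the terms are invented for illustration (loosely schema.org-flavored), not drawn from any real vocabulary:

```python
# A tiny ontology as subject-predicate-object triples.
# All terms are made up for illustration; real ontologies
# (schema.org, DBpedia, OBO Foundry) work on the same shape.
TRIPLES = [
    ("Bus", "subClassOf", "Vehicle"),
    ("Vehicle", "subClassOf", "Product"),
    ("Bus", "hasProperty", "seatingCapacity"),
]

def superclasses(term, triples):
    """Follow subClassOf links upward from a term."""
    chain = []
    current = term
    while True:
        parents = [o for s, p, o in triples
                   if s == current and p == "subClassOf"]
        if not parents:
            return chain
        current = parents[0]
        chain.append(current)

print(superclasses("Bus", TRIPLES))  # ['Vehicle', 'Product']
```

The point is only that meaning becomes mechanically traversable: an agent that has never seen "Bus" can still conclude it is a kind of "Product" by walking the links.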
Computer agents follow goals; yes, the same agents as in The Matrix. Here is the interesting thing: quite a few of the people who could understand and relate to what I am creating see it through the models they already have. For instance, for highly successful people, networking is an important part of life. Myself, I'm horrible at networking. I am guilty of reaching out only when I'm looking for work, and that isn't networking. I am willing to help in the other direction: I'll give references and give people a boost when they need it, but a constant worldview of networking for success is not really my thing. For that matter, success is not really my thing. My goal is to create something useful, something genuinely useful, something that will help us out of our clusterfuck. I am not accusing people who are good at networking of not caring about useful things. It is just an observation that my goals are different, and different goals make the interpretation of models different. Different models of The Matrix make The Matrix look different to different agents, so we have to have common models and related goals, or the agents spin.
Boom! We are all agents. But humans are bio agents, and we rinse and repeat every single time we look at the world. Our language, our background... all of these form different models. As I've described what I'm doing to a couple of very smart people, they latched on to the AI part of it. One person criticized it because computers can't ever really be conscious (Penrose's The Emperor's New Mind). Another associated collapse with AI, as though my worry was along the lines of The Terminator. It isn't. My worry is more mundane: multiple stressors hitting the extremely complicated supply chains of modern industrial civilization. I realize, now, that everybody has their own model(s), including me. At roughly the same time, I read that human cognition can only track a couple of actors working with a tool toward a goal. So not only do humans box everything up in their own unique set of models to comprehend the world, there is really no reasonable alternative to expect. It isn't some kind of freaky bad genetic flaw humans have; it is simply a cognitive limit coupled with freaky good features.
OK. Here it is: humans can use ontologies, aided by computer agents, to establish common models of systems. We can do it now. Everything is open, published, and growing. True, this work is motivated by AI efforts (as well as health and medical informatics), but by now almost every area of knowledge has been defined somewhere. This addresses the problem of differing models. And it is an interesting twist on The Terminator: it is exactly the machinery of AI (establishing what things mean) that can help us quickly model systems in a way that is standard and shared. I don't care one bit whether a car can drive itself, but I do appreciate that this means a common model of transport in the world of streets exists. AI tech can help us in this regard.
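The alignment step can be sketched in a few lines. Two people use different local words for the same concept; a shared ontology gives both words one canonical term, and agreement becomes checkable. Everything here (the names, the maps) is hypothetical, just to show the mechanism:

```python
# Two people describe the same concept with different local words.
# Each maps their word to a term in a shared ontology (terms invented).
ALICE = {"ride": "Trip"}       # Alice's word -> shared term
BOB = {"journey": "Trip"}      # Bob's word  -> shared term

def agree(word_a, map_a, word_b, map_b):
    """True when both local words resolve to the same shared term."""
    term_a = map_a.get(word_a)
    term_b = map_b.get(word_b)
    return term_a is not None and term_a == term_b

print(agree("ride", ALICE, "journey", BOB))  # True
print(agree("ride", ALICE, "walk", BOB))     # False
```

Trivial as it is, this is the whole trick: without the shared term, "ride" and "journey" are just two private models; with it, two agents can check that they are talking about the same thing.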
Here is the catch: what is the goal? If the goal is more money in the next five years, then the way to get more money is to ignore longer-term negative externalities. From this perspective, if we rely on computer agents to make business decisions, we will aggravate our own situation. Of course, the implications of the goal "keep global warming under 2C" are mind-bogglingly horrible for our lives in the short term. It might be wise to look at that anyway, but it is likely a political impossibility. At least we can model this stuff by leveraging the open, collaborative ontologies that already exist, and come up with something better than our limited cognition can manage without tools and agents. We can come up with reasonable goals within a common understanding of the world, leveraging machinery that was primarily created with making more money as its goal. We can own that and plug in our new goals. We can own it all, really. Technically, these things are necessarily open and free, at least for now.

cognition handbook ouroboros unicorn