
Artificial Unintelligence

Discussion in 'BYTE ME: Technology discussion' started by TomTerrific, Apr 28, 2018.

  1. TomTerrific

    TomTerrific Member SoSH Member

    Has anyone read the new book by this title by Meredith Broussard?

    I'll be honest and admit that I haven't, but I'm curious if anyone else has. I heard her on the Slate Money podcast, where she talked briefly about the unlikelihood and undesirability of purely automatic driving technology. Her arguments were extremely unconvincing, and not really coherent. That may have been a function of the format, however, and I'm guessing that her critique is much broader than just automated driving, so I'm wondering if anyone else has read her stuff and has an opinion.
  2. SoxJox

    SoxJox Member SoSH Member

    TT, no direct critique of your OP, but I'm sorry, I have to inject my own observations about what is being touted as "artificial intelligence", or, in this case, "automatic driving technology", as I think you are presenting it as an example. The term is being used widely and incorrectly. AI, as most people are applying it today to driving technology, is nothing more than the application of Boolean logic. That is NOT artificial intelligence. But that is what most people are "thinking" is happening re. "smart" driving technology.

    I had the great privilege of learning much (albeit very limited in a relative sense) from a...dare I say, genius. His name is John Norseen. Look him up. Unfortunately, he passed away from a brain aneurysm at the ripe age of 54. He studied with the Russian Academy of Sciences and Pavlov's Institute, among other august scientific bodies.

    He would tell you that AI is the ability to LEARN, not merely mimic.

    So, automatic driving technology is NOT AI. It is merely logic applied at some [hopefully] relatively high order.

    What it does not do is...LEARN. And that is what would be needed to make an automated driving technology work.
  3. SumnerH

    SumnerH Malt Liquor Picker Dope

    That is a highly idiosyncratic definition. Machine learning is one small part of AI, not used in all domains of the field. Expert systems, OCR, and dozens of other pattern-recognition tasks are usually considered within the rubric of AI.

    This hasn't been true for decades. Bayesian SLAM algorithms are often a prominent part of autonomous driving systems, but they're only one part. CMU was applying learning techniques to the driving problem when I was there in the mid-1990s (and many of those researchers have since moved on to Tesla, Uber, etc.). Here's a paper from a couple of years back identifying the use of inductive learning algorithms as one of the biggest challenges in testing autonomous driving solutions.
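    For anyone curious what "Bayesian" means there, here's a toy illustration of the belief updating at the heart of SLAM-style localization — hugely simplified and not drawn from any of the systems above. A robot on a 5-cell circular corridor narrows down where it is by alternating sensing and moving:

```python
# Toy 1D Bayesian localization: a robot on a circular corridor of 5 cells,
# some of which have a landmark ("door"). Belief over position is updated
# by sensing (Bayes rule) and moving (a circular shift, noise-free here).
world = [1, 0, 0, 1, 0]          # 1 = door present, 0 = no door
belief = [0.2] * 5               # uniform prior over position

def sense(belief, world, reading, p_hit=0.9, p_miss=0.1):
    # Multiply prior by the likelihood of the sensor reading, then normalize.
    post = [b * (p_hit if world[i] == reading else p_miss)
            for i, b in enumerate(belief)]
    total = sum(post)
    return [p / total for p in post]

def move(belief, steps):
    # Exact circular shift to the right (motion assumed noise-free).
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

belief = sense(belief, world, reading=1)   # robot sees a door
belief = move(belief, 1)                   # robot moves one cell right
belief = sense(belief, world, reading=0)   # now sees no door
# Probability mass now concentrates equally on cells 1 and 4 --
# the two positions that are exactly one step past a door.
```

    Real SLAM does this jointly over pose *and* map, in continuous space, with noisy motion — but the sense/move update cycle is the same idea.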

    Much research is focused on this, as well. For instance:

    Learning-based Lane Following and Changing Behaviors for Autonomous Vehicle: This thesis explores learning-based methods in generating human-like lane following and changing behaviors in on-road autonomous driving.​

    And a lot of the techniques are settled enough that they're no longer in the realm of research; there are full-on courses on the subject: MIT 6.S094: Deep Learning for Self-Driving Cars

    All of this stuff is actively pursued and implemented in commercial vehicles, as well. See, e.g.:

    An Empirical Evaluation of Deep Learning on Highway Driving
    Deep Learning in Ford's Autonomous Vehicles
  4. TomTerrific

    TomTerrific Member SoSH Member

    Well, yet again we've digressed at warp speed. So of course I have something I want to add.

    I'm guessing that what SoxJox is referring to by "learning" is "learning rules/relationships not explicitly contemplated in the training set". Which would put it outside the deep learning paradigm, right? (BTW, I'm not an AI guy, so this observation may not apply to some of the other learning methods Sumner discussed, such as inductive learning.)

    Which leads me back to my original question. One of the more technical arguments Ms. Broussard made related to the fragility of the neural networks produced by deep learning at basic recognition tasks. (The example she cited was a network trained through deep learning to recognize stop signs, which is then totally foiled if glitter and Pokémon stickers are applied to the sign.) However, her takeaway seemed to be that, since our current most advanced methods are imperfect, they represent real limitations on the ultimate performance of automated recognition algorithms, and point to the ultimate futility of the automated driving endeavor.
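    To make that fragility concrete: the mechanism behind those sticker attacks is that a small, carefully chosen perturbation to the input can flip a classifier's decision. Here's a minimal sketch using a hand-picked linear classifier rather than a real deep net — all numbers are made up for illustration:

```python
import numpy as np

# Toy illustration of classifier fragility (the "stickers on a stop sign"
# idea): a tiny linear classifier, and an input nudged just enough, in the
# direction that most decreases its score, to flip the decision. Real
# attacks target deep nets, but the mechanism -- a small, targeted
# perturbation -- is the same.
w = np.array([1.0, -2.0])   # "trained" weights of a linear classifier
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.5, 0.1])    # score = 0.5 - 0.2 + 0.1 = 0.4 > 0, so class 1

# FGSM-style step: move each coordinate against the sign of its weight,
# pushing the score down as fast as possible per unit of perturbation.
eps = 0.3
x_adv = x - eps * np.sign(w)    # new score = 0.2 - 0.8 + 0.1 = -0.5, class 0

print(predict(x), predict(x_adv))
```

    The perturbation is bounded (no coordinate moves more than 0.3), yet the label flips — the 2D analogue of stickers that look like noise to a human but are precisely aimed at the model's decision boundary.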

    As I said earlier, not a very satisfying argument.
  5. SumnerH

    SumnerH Malt Liquor Picker Dope

    It depends on how you're training things and how you're defining things. One could envision a world where the training is (partially) focused on other vehicles and how they react to things, so that if everyone else is treating a glittery Pokémon-stickered sign as a stop sign then the car could learn that even though it didn't recognize it itself. And if we all suddenly decided to use purple pentagons as stop signs, the car could learn that without being explicitly taught it (so long as there was some critical mass of informed drivers, either humans or newer autonomous cars, for it to learn from).
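    That behavior-inference idea can be caricatured in a few lines — the threshold and observation format here are invented purely for illustration:

```python
# Caricature of inferring a sign's meaning from other drivers' behavior
# rather than from the sign's appearance. Each observation is True if an
# observed car stopped at the unfamiliar sign.
def infer_sign_meaning(observations, threshold=0.8):
    """Return 'stop' if a large majority of observed drivers stop there."""
    if not observations:
        return "unknown"
    stop_rate = sum(observations) / len(observations)
    return "stop" if stop_rate >= threshold else "unknown"

# Nine of ten observed cars stopped at the glittery purple pentagon:
obs = [True] * 9 + [False]
print(infer_sign_meaning(obs))   # -> stop
```

    A real system would learn this jointly with everything else, of course, but it shows how semantics could come from observed behavior rather than a fixed training set of sign images.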

    Conversely, in many cases the training is all a priori, which results in pickled learning (the system did learn at one point, but is frozen in time).
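    The "pickled learning" point in miniature — made-up speed numbers, with a running mean standing in for a real model:

```python
# Two estimates of typical traffic speed: one model is trained once and
# then frozen ("pickled"); the other keeps updating as new data arrives.
# When conditions drift, only the online estimate tracks the change.
def running_mean(samples):
    m = 0.0
    for i, s in enumerate(samples, 1):
        m += (s - m) / i        # incremental mean update
    return m

history = [30, 31, 29, 30]      # training data (mph)
new_obs = [20, 19, 21, 20]      # conditions drift (say, a construction zone)

frozen = running_mean(history)            # trained once, then frozen in time
online = running_mean(history + new_obs)  # keeps learning

print(round(frozen, 1), round(online, 1))   # 30.0 vs 25.0
```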

    Part of the learning-based lane changing/following paper is getting autonomous cars to emulate human ones in some ways, so as to be less surprising to those around them; depending on how far you go in this direction, you may be able to observationally learn new rules of the road that weren't anticipated. Of course, you may also give up some of the benefits of computer-controlled driving if you emulate humans too much.
  6. jercra

    jercra Member SoSH Member

    Autonomous driving is almost entirely ML, not AI. It's generally pretty simple statistical models, not inference based on continuously adaptive CNN models. The time required for real CNN inference is generally greater than real time, and for things like driving it needs to be faster than real time. There's also very little in the way of real-time predictive algorithms that would let a computer decide whether a human driver isn't paying attention and is likely to blow through a light; current systems tend to be reactive, as in "that object is in front of me, so I should stop". I think Tesla is doing great work, and so are other, less famous, groups, but in the end I think it's the people that mess things up more than the computers.
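    On the real-time point, a back-of-the-envelope sketch of what inference latency costs in distance traveled before the car can even begin to react — illustrative numbers only, not from any actual system:

```python
# Every millisecond of inference latency is distance covered blind:
# the car keeps moving while the model is still thinking.
def latency_distance_m(speed_kmh, latency_ms):
    """Distance covered (meters) during the inference delay."""
    speed_ms = speed_kmh / 3.6              # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# At highway speed (100 km/h), compare a fast pipeline to a heavy CNN pass:
for latency in (10, 100, 500):              # milliseconds
    print(latency, round(latency_distance_m(100, latency), 2))
# 10 ms costs ~0.28 m; 500 ms costs ~13.9 m -- several car lengths.
```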

    I believe city centers will convert to "driverless" squares in order to truly bring on the rise of autonomous vehicles. Highways are easy. City centers are hard. Remove the people from the equation and you're on your way to low/no traffic, a reduction in overall vehicles, a near elimination of parking as an issue (thus essentially creating free additional travel lanes), and the creation of many acres of commercial real estate previously dedicated to parking lots. It's win/win, with only the issue of how to transition being the hard problem.

    Either way, the focus on driverless cars is so far from the scary parts of AI. If people are interested, I'd love to engage on where the real action in AI is: security and privacy. It's only the beginning, and it's already both way beyond and way behind the common perception.
