The New York Times recently published an opinion piece by documentarian Frank Pavich about alleged production stills from a “Tron” film made by surrealist filmmaker Alejandro Jodorowsky. Pavich, who interviewed Jodorowsky extensively for his documentary Jodorowsky’s Dune, was stunned that this Tron film had never once come up in the more than two years he spent interviewing the Chilean filmmaker. How could that be? The answer was deceptively simple: Jodorowsky never made a version of Tron. The images that fooled Pavich were created with the AI software Midjourney by Canadian director and artist Johnny Darrell.
The following is an interview with Johnny Darrell. The interview has been lightly edited for clarity.
—MCF
Johnny, thank you so much for taking the time to let me interview you. For those who aren’t familiar with your work, you’re an accomplished creative with an impressive résumé: writing, directing, designing, and producing work for some of the biggest names in the industry, including Disney, Netflix, and Cartoon Network. Interestingly, what seems to have put you in the public eye wasn’t one of your many professional projects but rather a personal one: using AI software to create still images from films that were never made.
I’m wondering if you can walk me through your creative arc, the work you’ve done previously, and how that led you to use AI in this way?
I started off in the “industry” directing low-budget music videos for local (Vancouver) indie bands. Creatively it was a lot of fun, but back then (early/mid 90s) the costs of making a video were huge because we had to hire expensive professional post-imaging services (online editing), so there was never any money left over after the videos were done. I was broke, but then one of the first 3D CGI animation studios (if not the first), Mainframe Entertainment, opened in Vancouver. They needed editors, and I somehow found myself becoming one. From there I became a director, and I’ve been directing ever since on a variety of projects, from action/adventure, sci-fi, and fantasy to preschool and now (finally) some adult fare. The one thing all these genres have in common is storytelling, and directing hundreds of episodes across them has allowed my brain to play all kinds of roles.
The ridiculous thing is that I’m a terrible illustrator and I can’t animate whatsoever, so I definitely have imposter syndrome, but somehow I make it work. I’ve been working remotely for a number of years now (even before Covid), so a lot of my direction comes to my crew in text form. I need to clearly state what I’m looking for with text only (unless I’m in a Zoom call). Again, if I could draw, my notes would be less wordy and more visual. I also do graphic design, more as a hobby and a way to play around creatively for myself. The nice thing about graphic design is that it’s just me. No producer notes, no client notes, no rules, no deadlines, no compromise, so I can just play and experiment.
A couple of years ago, I downloaded an app called Wombo Dream, which created AI images from text prompts and pre-trained models built by the developers. The results were immediately alluring to me: abstract, crazy images akin to fever dreams. I started creating book covers for fun, prompting ideas related to some classic books and some of my favourite books. I’d then take these images into Photoshop and create the typography for the titles and authors’ names. Then in 2022 Midjourney came along and I thought I’d check it out. I was immediately addicted.
A.I. Book Covers by Johnny Darrell
The images coming out of Midjourney were leaps and bounds more robust than Wombo’s. I also started fooling around with DALL-E and found that it was even better at some things than Midjourney, but for my wallet’s sake I needed to pick one, and Midjourney was it.
The book covers are great, there’s something especially poetic in seeing Frankenstein’s Monster abstractly re-constructed from trained image data. Very meta.
YEAH! A great parallel. The Frankenstein cover was one of the first I created because I felt the app was creating monsters: taking parts and building something beautifully grotesque.
There really is something beautifully grotesque about a lot of the early AI art. The first images I saw from DALL-E, especially with human figures, were unnerving and yet completely captivating.
You brought up a number of things I’d like to dig into, but first I’d really like to know how you conceive of your role in relation to this work, and if it’s shifted your own self-conception, creatively speaking. Are you the creator? The curator? The summoner? Is this technology a tool? Or is the software also a participant?
I’m certainly no Frankenstein; I think the brainiacs behind the building of the AI software play the role of Frankenstein. All images available to the software become the parts that make up Frankenstein’s Monster so that must mean I’m Igor.
This whole “who is the artist?” question behind AI “art” is a tough nut to crack. I’d like to think I’m being an artist when I receive the images the program develops, but am I? I don’t know. I keep aligning it with photography. When photography first started up, a lot of traditional painters looked down upon the medium because the person was just “pushing a button.”
But a camera can’t just make its way to a serene location and choose a desired film stock. It can’t pivot on a dime in reaction to an unexpected change in a subject’s position to reframe itself to get a better composition. It’s safe to say that the person behind the lens operating the machine is the artist, not the Pentax. And then on top of that there was the skill, science and artistry it took to bring those photos to life in a darkroom.
As photography became more and more accepted, and the people pushing that button were capturing amazing new images and slowly being recognized as legitimate artists, a lot of people thought, “Well then, I guess it’s the death of painting!” and “Painters need to find a new career!” The parallels to AI are striking, with people in my industry saying similar things: “AI isn’t art!” or “I guess I should look for a new career.” I tell these designers, “Relax! AI isn’t going to steal your job. But an artist who knows how to use AI will.”
When desktop publishing first started coming around, my friend was the art director for Adbusters magazine. I’d go visit him and be amazed at what he could do with this crazy new program called Photoshop. I needed to do what he was doing, so I bought a Power Macintosh 6200 and some pirated copy of Photoshop 2.0 (before layers) and learned how to push pixels around. Friends in the fine art community pooh-poohed digital art, digital layout, and design because it “wasn’t art.” My friends and I were “just pushing buttons.”
So here we are, 30 years later and I guess I’m still just pushing buttons. Like Igor.
But perhaps I’m wrong about who Frankenstein is. Maybe with Midjourney and other AI programs, the synthographer or promptographer (or whatever term’s going to stick) has to take on all three roles: Frankenstein, Frankenstein’s Monster, and Igor, working in tandem with the Tesla coils and lightning bolts and brain-caps that make up the lab. Maybe Midjourney is the lab that magically built itself in Frankenstein’s dungeon so Frankenstein could play and build and create?
It really is a puzzle, isn’t it? And, as you pointed out, just like the debate around the introduction of photography, it’s forcing us to expand and update our definitions.
Speaking of definitions, I hadn’t heard the term “synthography” before. Did you coin it? Is it the same thing as prompt engineering? Or do you consider synthography to be a subset of prompt engineering, specifically related to art?
I didn’t coin “synthography,” but it was the first term I heard for this AI text-to-image work, shortly after the New York Times’ “This Film Does Not Exist” article came out featuring my Jodorowsky Tron images. “Promptographer” is a term I heard maybe a month ago in an article, and I thought that term also works. “Prompt engineering” works too, but it feels like that gets more into the guttyworks of the coding behind AI text-to-imaging. I think “promptography” makes the most sense, but “synthography” sounds more haunted and mysterious to me.
Agreed, “synthography” feels more aligned with the mysterious, almost magical nature of the technology. And if it implies wizardry, then you’ve definitely earned the title of Synthographer. How did it feel to learn you were getting plugged in the New York Times? Had you had any previous contact with Frank Pavich, who penned the piece?
Yeah, that was unexpectedly awesome, and it put a small pep in my step that day! The graphics director of the opinion piece, Jeremy Ashkenas, emailed me asking if I would like to collaborate with Frank Pavich on it. I really liked the documentary Jodorowsky’s Dune, so I was honoured to meet Frank. We had a couple of Zoom calls and shot the shit about all sorts of AI- and film-related things. Not that either of us had that much background experience or extensive knowledge in the AI field (I still don’t), so we were both kind of spellbound, speculating about this globally significant paradigm shift. I think we both think we’re in trouble. I hope not. I think we need to pump the brakes on the release of this revolutionary technology. The likelihood of it being dangerous is on par with how beneficial it will be. I don’t like those odds. We have an opportunity right now to put some of the best minds together from all relevant fields, industries, and places and figure out how to feed it into society in a safe and effective manner. Crash-test-dummy this sucker like we do with vehicles and drug trials.
What a cool connection to make, that’s great that you guys had an opportunity to talk shop and dive into some of the larger issues. I’d like to come back to that, but first, I have to know, did Frank give you any clues as to whether Alejandro had seen any of the Tron images?
Yeah! Frank said that when we do meet we should go get matching BFF neck tattoos. Frank finally shared the images with Alejandro, who said that he thought the images were “neato mosquito.” I don’t know if he actually used those words, but I bet he thought them. And then shortly after, Taika Waititi (who’s working on a film adaptation of Alejandro’s graphic novel The Incal, which is sort of an off-shoot story based on all the discarded Dune designs) started sharing the images. I believe he said something to the effect of, “I would love a computer to do my job for me.”
Wow… Pavich, Jodorowsky and Waititi. To have such influential directors complimenting and promoting your images must feel amazing, and probably somewhat surreal. Personally, I’d love to see a Tron film as you’ve imagined it; I’m sure a lot of people would. Maybe one of these three will run with it.
If it were to get expanded into a movie, would you want to be involved? And to Taika’s point, how much of a role do you think AI should play in this hypothetical Tron reboot? Taken to its logical extreme, if the entire feature-length film could be reimagined fully in the style of Alejandro Jodorowsky, conjured instantaneously from a single text prompt given to an AI, what would you think about that?
As it stands right now, for professional studio films, I think AI should and could only play a supporting role. Human designers in all departments could benefit from having AI assist with ideas and concepts, but eventually those ideas would need to be wrangled and bare-foot wrastled with to maintain a cohesive look based upon the aesthetic goals of the production designer, art director, director, etc. Prop masters, costume designers, set designers, creature designers, and vehicle designers could use AI to inspire them, to explore, and to find unexpected ideas, but their final designs should be done by human hand. AI is good at giving you the cool and unexpected, but it’s really difficult (at least for me) to get something exactly right, let alone maintain absolute consistency and cohesiveness. A production has its own set of design rules, and it’s up to the art directors and production designers to make sure those rules are being implemented. A costume designer might use AI to explore hundreds of helmet or weapon ideas, cherry-pick the best 10, and then rework them properly to follow those rules.
So IF Jodorowsky’s Tron were to be made, I would hope that some of the images act as springboards for actual craftspeople to expand upon and I’d be more than happy to help act as a muse for talented designers.
Okay, fingers crossed, one of them gets it greenlit and you get tapped for art direction.
I like the image of you barefoot wrestling the AI to produce a consistent product; there’s something really relatable about that. Like, we’ve all had our “I will FIGHT you” moments with various computer programs.
Which is part of what’s so impressive about your Tron images: they are remarkably consistent in style, mood, and even vintage. Many of the characters appear convincingly of that era, not only in costume, but also because they actually seem to have been shot on 35mm film.
I’m thinking of this image in particular:
Pavich, in his opinion piece, gives the impression that your process was essentially typing slight variations of “Tron in the style of Jodorowsky” into Midjourney and then sifting through the results. That feels like a gross oversimplification. Take us into the wrestling ring: what did that process actually look like? What was typical? Which images were the outliers? What was similar and what was different about coaxing each of these images out of the AI?
I was inspired by another synthographer’s vision of Star Wars in 1900, and I thought I wanted to create something like that too, but there’s far too much Star Wars out there. Almost everyone does some kind of Star Wars thing, and it drives me bonkers. So I thought of doing Tron in the style of Fritz Lang. The prompt:
Production still from 1919 photo of the movie TRON as a silent black and white film
I usually start with as minimal a prompt as I can at first just to see what kind of results I get. In this case, I wasn’t buying it. It didn’t look convincing or authentic so I tried a variation on the prompt:
Production still from TRON as a vintage turn of the century silent black and white film
And got this:
Still not convinced, and at this point I kind of wanted to just move on and experiment; if need be, I could come back to these original prompts and push them around a bit more. So I thought, what about “Cabinet of Dr. Caligari”? I entered this prompt:
Production still from TRON as a vintage turn of the century silent black and white film in the style of The Cabinet of Dr. Caligari
I liked the surrealism and thought I should give it colour, and in keeping with the surrealism I chose Jodorowsky and moved it to the 70s.
Production still from 1976 of Alejandro Jodorowsky's TRON
My jaw dropped and I knew I wanted to expand on this. The bottom left image (#3) especially looked somewhat “authentic.” So I did a re-roll of the same prompt:
Again, #3 made my eyes pop out of my head and fall onto my laptop keyboard. I put them back into my head and changed the prompt slightly to
Production still from 1976 of Alejandro Jodorowsky's TRON, light cycles
I then just did about 10 more re-rolls with minor word changes. On a whim I changed Jodorowsky to Stanley Kubrick and got this:
And then Wes Anderson:
But I decided to go back to Jodorowsky, which felt more “authentic” to me.
This was all in December of ’22, and I don’t know if there was anything different about that version of Midjourney (maybe it was just superstition, plus a real lack of knowledge of how the software works), but a while back I had started putting some really simple camera terms into earlier prompts, so I added them to my original prompt:
Production still from 1976 of Alejandro Jodorowsky's TRON, 20 ASA 35mm kodachrome photo
Maybe it’s a placebo effect, but I felt like these terms were contributing to it becoming more cinematically authentic. Then it was just changing the environments, the props, etc. So yeah, it wasn’t a complex prompt, and I kept my shoes and socks on and didn’t have to wrestle it. But I also wasn’t trying to build a narrative. I was just creating images and exploring. Creating a narrative or a story becomes much harder and more complex, at least for me. I’m too much of a dummy to figure out Stable Diffusion and building models and all that mumbo jumbo.
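To make that iteration concrete, here’s a minimal sketch of batching prompt variations like the ones above. Midjourney has no official public API, so this just prints prompts to paste into Discord; the scene and film-spec lists are hypothetical examples, not Johnny’s actual prompts.

```python
# A minimal sketch of batching prompt variations like those described above.
# Midjourney has no official public API, so this only prints prompts to
# paste into Discord. The scene/spec lists are hypothetical.
import itertools

base = "Production still from 1976 of Alejandro Jodorowsky's TRON"
scenes = ["light cycles", "the Grid", "Recognizers over the city"]  # hypothetical
film_specs = [
    "20 ASA 35mm kodachrome photo",          # the spec Johnny mentions
    "35mm anamorphic, heavy film grain",     # hypothetical variant
]

for scene, spec in itertools.product(scenes, film_specs):
    print(f"{base}, {scene}, {spec}")
```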
Oh interesting, so Pavich was actually pretty close to the mark in his description. Regardless, it is fascinating to see your process of discovery and how you guided, rather than grappled, the software toward the final image set.
I think you intuited something critical in adding the film specs, it’s the difference between creating something cool and generating images that fooled your colleague. In a sense, Jodorowsky’s Tron passed the Turing test.
Coming back to the notion of developing characters and creating narrative arcs, it does seem like the natural progression of this software would be to someday achieve a level of total consistency and repeatability. Absent that, people are finding clever ways to finesse some visual coherence out of the software with techniques like “seeding.” I’m curious to hear more about your process (when the socks do come off) for wrangling serial imagery out of Midjourney. Can you walk us through an example?
I tried seeds in v3 (I think) and I couldn’t get them to really work, and because I’m a dummy and lazy I kind of just gave up on it quickly. I’ve come back to it here and there to try to figure it out, watched YouTube tutorials, etc., but it just didn’t click with me. I’ll give it a go again in the near future, I’m sure.
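For readers who haven’t tried it: Midjourney’s “--seed” parameter pins the starting noise, so re-running a prompt with the same seed should give similar results, though how strictly it reproduces them has varied across model versions. A quick sketch reusing the Tron prompt (the seed value is arbitrary):

```python
# Sketch: re-rolling the same prompt with a fixed --seed. --seed is a real
# Midjourney parameter, but how strictly it reproduces results has varied
# across model versions; the seed value here is arbitrary.
prompt = "Production still from 1976 of Alejandro Jodorowsky's TRON, light cycles"
print(f"{prompt} --seed 1234")          # first run
print(f"{prompt} --seed 1234 --v 5.1")  # same seed, pinned to a model version
```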
With Snakes Are The Devil, it was more of a concept narrative about a gang of greaser 60s motorcycle punks, so the characters are all just generic leather-clad bad boys. There’s no stand-out character; just nameless MC greasers who discover an alien entity, and soon they’re all transforming into monsters. I wanted that cheap 60s B-movie look and wanted it to be somewhat authentic to how a producer/director might have put something like this together on a small budget. I figured they would have used crappy costumes and masks scrounged from a variety of backlot wardrobe departments, and stock footage for the final conflict with the military, so I changed up the prompting to be more black-and-white 50s military footage. So all of that was just typical prompting. There were some really cool images that didn’t get used because this fictitious production company’s budget wouldn’t have been able to afford to build such a thing.
The real challenge was getting specific camera angles. I wanted a shot where the camera was in the crater looking up at the gang as they were looking down into it. I don’t know how many times I tried; I just could not get the shot. I even went against my personal code of never using an image prompt (something I use more now) and plugged some images in to help it, and it just made things worse, so I abandoned the shot idea. I think now, in v5.1, I could probably get the shot.
Another thing I wanted to do was try to get some of the gang members on their bikes in front of a rear-projection screen to get that classic old-movie trope of driving-in-front-of-a-screen look. That just made MJ really confused.
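For context, Midjourney image prompts are reference-image URLs placed at the front of the prompt, optionally weighted with the “--iw” parameter. A hedged sketch of the kind of crater-shot attempt described above; the URL and wording are placeholders, not the prompts actually used:

```python
# Sketch of an image-prompted composition attempt like the crater shot
# described above. In Midjourney, reference-image URLs go at the front of
# the prompt and --iw sets their weight. URL and wording are placeholders.
reference = "https://example.com/crater-reference.jpg"  # hypothetical reference image
text = ("low angle from inside a crater looking up at leather-clad bikers "
        "peering over the rim, 1960s B-movie, 35mm film still")
print(f"{reference} {text} --iw 1.5")
```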
For a brief moment I even considered Photoshop, but again, my personal code for these things at the time was to use only “pure” MJ exports. No photobashing, no colour correction, no Midj-digit or hand repair, etc. I allowed myself the use of PS for the typography and layout of the movie posters and lobby cards, though. Now, every once in a while, I’ll clean something up if it’s ruining a good result. It’s usually erasing a stray ding-dong idiot walking into frame and ruining a shot…
That’s too funny, it’s like an extra during his first day on set. I’d also be inclined to unleash the Remove Tool on that guy. And then call Central Casting to complain.
The process you described in storyboarding Snakes Are The Devil really highlights some of the biggest challenges with creating images with this technology. It’s random, it can be weirdly literal, and it will outright ignore certain requests while botching others entirely. Given that it can be such a battle, I’m curious why your first inclination is to keep the medium as pure as possible. What’s interesting or worthwhile about keeping the image generation contained to Midjourney?
I’m not certain! I guess in some stupid manner I just thought it was cheating. That’s not the word. We’re in the infancy of this new program that is absolutely mind-blowing. The technology and its results are incredible, and I guess I just wanted to be able to show off its capabilities without adding another layer of smoke and mirrors. I didn’t want to explain to people which parts were AI and which parts were photobashed or manipulated. I liked that it couldn’t get hands right; it was like the smoking gun that it’s not real, and for some reason I liked people seeing pure, unfiltered, unedited results. I wanted a historical and accurate record of Midjourney’s abilities, I guess. Pure AI, no tricks or manipulation (with the exception of turfing that occasional wandering extra, which I have to admit wasn’t too often. I could count the number of times I did that on one hand. Human hand, that is. If you used Midjourney to count digits on one hand you might end up with eight or ten. Midj-digits.)
Also, real quick, you used a term I’m not familiar with: Midj-digit. What’s that?
Something stupid I just made up to describe hands made by Midjourney.
Midj-digits. I like how much surface area this technology creates to map new language onto. In a few years’ time I imagine there will be an entire lexicon of AI-specific terminology.
I can very much relate to wanting to keep the results pure, and because this is the infancy, it really is a special, unrepeatable moment. As with any new gadget, but probably even more so in this case, the urge to really push it, to test the limits and see what it’s capable of, seems only natural. Like you said, Photoshop, while not exactly cheating, sort of muddies the water; we already know what it can do.
Along with all these new capabilities come considerations, some new, some old. In his opinion piece, Pavich posed several questions, I’m curious how you’d respond to them:
“To what extent do these rapidly generated images contain creativity? And from what source is that creativity emerging? Has Alejandro been robbed? Is the training of this A.I. model the greatest art heist in history? How much of art-making is theft, anyway?”
We’ve already discussed the first two questions somewhat, but maybe you could address the last three.
Was Alejandro robbed? No. I don’t think so.
If I had any talent in illustrating, I could have created illustrated renderings of Jodorowsky- and Tron-inspired images and posted them online as fan art. And I think the results could look cool (because come on, it’s Jodorowsky and Tron!), and I bet many people would say, “I wish this were a movie!”
If I had the budget and the time, I could have paid talented craftspeople to create designs, costumes, sets, models, and studio equipment, hired a professional photographer for a photoshoot of those models in costumes on elaborate sets, and posted the photos online as fan art. And I think the results could look cool (because come on, it’s Jodorowsky and Tron!), and I bet many people would say, “I wish this were a movie!”
It would most likely have a different look than what Midjourney spit out, but there’d be shades of similarity because my designers would be drawing upon a vast supply of reference materials to create things for my photoshoot. But who’s got the time and money to do something like this?!
But Midge can do this quickly and cheaply, and I don’t think it’s doing anything different than the theoretical designers and craftspeople I’ve theoretically hired for my theoretical photoshoot.
Midge was able to, in a matter of minutes, put my thoughts to screen. And then I posted them online as “fan” art. And I think the results looked cool (because come on, it’s Jodorowsky and Tron!), and many people said “I wish this were a movie!”
AI isn’t doing anything differently than any traditional designer who has a vast library of reference material and books. I have a bunch of reference books showcasing 70s & 80s punk flyers, punk album covers, exploitation movie poster art, 90s Swiss graphic design, old 60s British hard-cover comic volumes. I used to draw inspiration from these. Then the internet came along and it was like owning the world’s biggest design library. Anything I wanted, I could basically find and be inspired by or use directly in my personal designs and work. These AI imaging programs are doing the exact same thing (maybe… I honestly don’t know how these programs work).
What about comic conventions where you’ll find “unsanctioned” artists sitting at their booth selling their fan art of popular characters? Are they stealing money away from IP owners? Technically, owners of all these properties could slap these artists with cease and desist letters but it’s rare because most of these pieces celebrate their property and just sort of help bring positive attention to their beloved characters. It’s probably more common for copyright owners to take legal action when their IP is being used in a way that goes against a brand’s values.
And of course if a fan-artist emulated Mike Mignola’s art style and passed it off as an original Mignola, that’s blatant theft/fraud. I think an artist has every right to draw in the style of Mignola should they wish to, but it should be clear that it’s just fan-art/emulation.
As for the software itself? If AI imaging coders/companies have knowingly and purposely used specific people’s art to train their software, then I think those artists should be compensated. Not unlike music sampling. In the early days of sampling, it was kind of a wild west and samples seemed to be free for the taking, but it didn’t take long until people were saying, “Hey, that’s my art, pay me.” Beastie Boys’ Paul’s Boutique was the game-changer in terms of how creative an artist could get by taking other people’s art and making it new. But they had to pay for it, to the tune of about $250,000. If Paul’s Boutique were produced today, it would cost around $20,000,000, and that’s not just because of inflation but because of how the business now works in regards to samples. And so I think an AI software’s makers have the responsibility of compensating artists who directly contribute to the software’s capabilities and that company’s success.
I think you make an important distinction between influence and forgery. Using Midjourney to create “new” work by an established artist and trying to profit from it would clearly be unethical. But what if there’s no profit motive? How does that change the ethical calculus? I’m thinking specifically of the debate around Keith Schofield’s images of David Cronenberg’s “Galaxy of the Flesh.” For any readers who might not be familiar, Keith Schofield is a TV commercial director who came under fire for posting images of a supposed Cronenberg film without any indication that they were fake.
Forgery or fan-art, what’s your take? Do fan-art AI images need to be clearly labeled as AI? Do AI images always need to be labeled as AI, regardless of the context?
I was unfamiliar with this Galaxy of the Flesh debate. This fan art looks cool to me, and it’s no different than my Jodorowsky’s Tron or Pong or Magic 8 Ball. It’s like fan-made trailers for fictitious movies like Thundercats. It might fool people at first, but it doesn’t take long, or more than a little Google search, to figure out that it’s not real. But that’s entertainment, and no one gets hurt and there are no big consequences; just a little disappointment maybe.
Where it gets serious, of course, is when deepfakes get used to skew the truth. It’s all fun and innocent when someone puts Stallone’s face on the Terminator, or Schwarzenegger’s face on Bill Hader’s face while he does his vocal impersonation, but this is going to get seriously dangerous when it comes to politics.
If AI voice generation technology had been as good, or as publicly accessible, in 2016 when the audio recording of Trump’s “grab ‘em by the pussy” came out, he would have just said, “Nope. Not me. Deepfake. I know more about deepfake technology than anyone, so believe me, that’s clearly a deepfake from China,” and it would have been very hard to know whether he was lying or not. Even I, who detest the orange turd-gobbler, would have to think, “it coooould be AI.” The upcoming 2024 presidential election is going to be insane with deepfake footage, and candidates are going to have to spend a lot of time disproving smear campaigns.
There’s an interesting British series called The Capture, about a detective investigating a kidnapping/murder. This leads her to discover an underground activist group that uses deepfake technology to sway public opinion. There’s a part of the series where the UK’s counterterrorism agency uses deepfakes to help create the proof it needs to convict some men involved in a terrorist attack. They know for a fact that these terrorists were responsible, but to a jury there’s no hard evidence, so the government uses deepfaked evidence, called “Correction,” to sway the jury into finding the terrorists responsible. It’s slightly sensational, of course, but it’s good commentary on where we’re heading.
As for labeling AI media? I don’t know. DALL-E has the small strip of colour bars in the bottom corner and Runway videos have three small animated circles to help brand their media, but these are easy to hide or crop. I’m not technically savvy enough to know how things work under the surface of these machines. Maybe there’s already some invisible metadata that could easily be analyzed if a brainiac got in under the hood? Ultimately, if news outlets broadcast footage or images that are known to be created with AI, I think it’s their responsibility to label it with an on-screen watermark or chyron. It’s probably not enough for a news anchor to verbally state, “What you’re about to see is not real,” before or after broadcasting it.
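On the invisible-metadata point: some generators do write provenance tags into image metadata, and standards like C2PA aim to formalize this, though such tags are easily stripped. A minimal sketch of inspecting what an image file carries, assuming the Pillow library is installed; the filename is a placeholder:

```python
# A minimal sketch of inspecting an image's embedded metadata, which is
# where provenance tags would live if the generator wrote any. Requires
# Pillow (pip install Pillow); "image.png" is a placeholder filename.
from PIL import Image

img = Image.open("image.png")

# PNG text chunks (some tools write entries like "Software" or "parameters")
for key, value in img.info.items():
    print(f"{key}: {value}")

# EXIF data, if present (more common in JPEGs)
exif = img.getexif()
for tag_id, value in exif.items():
    print(f"EXIF tag {tag_id}: {value}")
```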
The distinction between fooling the masses in Art and fooling the masses in Politics feels like an important one. I’m reminded of Natalie Portman’s line in V for Vendetta, “Artists use lies to tell the truth, while politicians use them to cover the truth up.”
You mentioned earlier that you and Pavich shared a lot of concern about the pace and direction of the technology, warranting the kind of pause that tech luminaries like Steve Wozniak have called for. Are you specifically concerned about the disruption that will be caused by deepfakes? Or are your concerns more broad? Where do you see this going?
Deepfakes are certainly going to be problematic, but as a whole, I think AI needs to be seriously put on hold for a bit. If you haven’t seen Tristan Harris and Aza Raskin’s talk about why it’s important that we cool our jets on the progress of AI, then I really think you should. It’s a lengthy talk at just over an hour, but it’s super important. These guys are actually extremely smart, so you should listen to them and stop listening to me blabber.
Wow, I’ve heard Raskin and Harris speak before, and I’ve seen The Social Dilemma, but I hadn’t heard them break it down quite like that. That was pretty alarming. I thought their classification of AI as a kind of golem was especially useful (and clever) as a concept. They make a compelling case for a coordinated, collective response.
Yeah. I think it’s super important that we heed this warning and follow their advice. I basically use one program as a hobby, so assessing the greater impacts of AI on society is beyond my skill set. I’ll take Harris and Raskin’s word that if something needs to be done before it’s too late, then yeah, something needs to be done.
Johnny, I’m very interested in what you have to say on the topic. Is there anything that Harris and Raskin aren’t seeing or naming?
Nope!
I do also want to hear the flipside, what are the positives? You’ve already described the extent to which this technology has unlocked capabilities and results that would formerly be too resource-heavy, too expensive, to even attempt to undertake. What else do you think this will unlock? You’ve gotten a glimpse of the future, what do you see?
If you think there’s too much entertainment content online now, just wait. It’s about to become completely nuts. And from now on, when we watch a TV show or movie and the bossman says, “Zoom in on that and enhance!” it’s no longer science fiction.
Haha, you’re right, that trope will finally be true. CSI: Miami for the win.
Johnny, I really appreciate you taking the time for this interview, it’s been very informative, thank you so much!