The AI Disruption

By: James Perreault

Disruptive technologies don’t change what people want—they change how easily people can get it. Photography evolved from a specialized, skill-intensive craft into an everyday activity through digital cameras and AI-driven computational photography. While this lowered barriers and democratized image capture, it didn’t eliminate the need for artistry; professionals still thrive where nuance, control, and intentional creation matter.

The same pattern is now unfolding in software development. AI has dramatically reduced the barrier to building functional applications, making it accessible to non-experts. However, just as a smartphone camera doesn’t replace a skilled photographer, AI-generated software doesn’t replace experienced developers where complexity, quality, and purpose are critical. The true differentiator remains the creator—not the tool—and those who learn to wield new tools thoughtfully will continue to stand apart.

Disruptive technologies bring disruptive change for businesses. I’d like to take a deep dive into a recent example that my fellow Xennials lived through: photography.

When I was young, photos were almost exclusively a physical medium. You typically paid a service to develop your photos. You didn’t need a darkroom or knowledge of the chemistry of negative development, but you did need to purchase film and have the negatives developed and printed, usually as a 4×6 print. Because of the cost, we were judicious about what we photographed. If we had something important to capture, we hired a professional photographer.

Film photography was the principal way to capture memories. That said, the method by which a visual memory is captured isn’t actually the goal for most people. Whether an image is drawn, painted, printed, or viewed on a screen, the end result is an image of a particular feeling or place or person in a moment in time. If you have the time and the talent of Michelangelo, Van Gogh, or Bob Ross, a painting may be the best medium in which to capture the memory, idea, or emotion. Indeed, for thousands of years, paint was the leading method by which images were preserved. Then, in the mid-1800s, photography became a viable technology. Around 1900, the Kodak Brownie made photography affordable for almost everyone, and by 2000, digital photography was entering the consumer scene. Each new technology made capturing images easier and more accessible to more people.

Throughout these innovations, the impetus remained the same: people want to capture that which they feel is important to remember. Technological innovations enable, not create, this desire. Customers did not desire film specifically; they wanted to capture memories without paint. Customers didn’t drive the development of digital photography for its own sake; they wanted imaging without the ongoing costs and hassle of film, printing, and scanning. As digital photography took off, the medium in which we view and share photos likewise evolved from framed images to photo albums, to digital albums, to social media. Yet despite digital imaging supplanting the majority of the previous methods, paintings are still created, framed, and hung today. Photos are still captured on film, printed, and framed. The older methods didn’t disappear, but they became more niche and specialized, accompanied by nostalgia and talk of superior highlight roll-off and the organic nature of grain.

The rise of digital photography was meteoric. The first digital cameras targeted at consumers were released in the mid-to-late 1990s, and by 2004, sales of digital cameras had surpassed those of film cameras. Digital quality was good enough, and cheap enough, that customers were starting to prefer digital capture to film. That same year (2004), Apple started developing a new device that would eventually be released in 2007 as the iPhone. Just five years later, in 2012, Kodak would declare bankruptcy. In 2016, with the release of the iPhone 7 line and the multi-lens camera system of the 7 Plus, Apple’s device ubiquity would make the iPhone the most common camera in the world, less than ten years after its introduction (https://www.cnet.com/tech/mobile/iphone-top-camera-flickr-2017-report/). Companies that didn’t quickly adapt to digital were left behind.

The exodus from film may seem counterintuitive to those who have studied the limitations of light, optics, and photography. The iPhone’s lens and sensor are small, and physics places hard limits on how much image quality small optics and sensors can deliver (https://www.northlight-images.co.uk/downloadable_2/Physical_Limits.pdf). That paper describes how, by 2009, we were already approaching the theoretical maximum quality that can be achieved from a camera sensor of a given size. Anecdotally, we have a DSLR (a Canon EOS 6D) released in 2012 that still takes much higher quality photos than even the latest smartphones in 2026. So why did image quality in smartphones continue to improve significantly after 2009?

Physics dictates that the relatively small sensor in a phone cannot compete with the image quality of the large sensor in an SLR, so one obvious means of improvement has been growing the sensor itself. The original iPhone’s image sensor measured around 3.5mm wide, the iPhone 7’s about 5mm, and the iPhone 16’s about 11mm. That still pales in comparison to the roughly 35mm width of the 6D’s full-frame sensor, but it is nonetheless a huge improvement in a small device. Sensor size alone, however, can’t explain the dramatic improvement in image quality, because the sensor remains relatively small and optically limited. The explanation lies in “computational photography,” introduced with the dual-lens iPhone 7 Plus, which pairs multiple lenses with sophisticated software incorporating machine learning and AI to appear to overcome physics. This approach is what allowed the iPhone to produce photos that rival larger cameras and to become the de facto camera in most people’s lives.
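
To put rough numbers on that physics argument, here’s a minimal back-of-the-envelope sketch in Python. The sensor widths come from the paragraph above; the 4:3 and 3:2 aspect ratios and the square-root shot-noise model are my own simplifying assumptions, not measured specifications.

```python
import math

def sensor_area_mm2(width_mm: float, aspect_w: int, aspect_h: int) -> float:
    """Approximate sensor area from its width and aspect ratio."""
    height_mm = width_mm * aspect_h / aspect_w
    return width_mm * height_mm

# Widths quoted above; aspect ratios are assumed (4:3 phone, 3:2 full frame).
phone = sensor_area_mm2(11.0, 4, 3)       # iPhone 16 main sensor, ~11mm wide
full_frame = sensor_area_mm2(35.0, 3, 2)  # Canon 6D, ~35mm-wide full frame

# All else being equal, light gathered scales with sensor area, so a single
# phone exposure starts with roughly this much less light than the 6D:
print(f"Area ratio (6D / phone): {full_frame / phone:.1f}x")  # ~9x

# Computational photography claws some of that back by stacking frames:
# averaging N aligned exposures improves signal-to-noise by roughly sqrt(N)
# under a simple shot-noise model.
for n in (1, 4, 9, 16):
    print(f"{n:>2} stacked frames -> ~{math.sqrt(n):.1f}x SNR gain")
```

Stacking sixteen frames buys about a 4x noise improvement, which is a large part of how a sensor one-ninth the size can produce output that looks competitive, even though no amount of software changes how much light actually reaches the lens.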

As an aside, I want to apologize to my Android-carrying friends and family, some of whom are yelling at the screen about how their phone’s camera output is superior to the iPhone’s. I know you have wonderful cameras too, but for simplicity, I’m only referencing the iPhone. The two systems are effectively equal for the purposes of this story, so I respectfully ask that you let it go for now.

All of these technological innovations are fascinating to me, but we shouldn’t lose sight of the reason behind them. The average user doesn’t know or care that their iPhone is dancing around the laws of physics in order to deliver high quality photos of their dinner. The average user only cares about their goal: to capture that which they feel is important to remember. Recent technological innovations have lowered the barriers to doing so by a staggering amount. What once required specialized knowledge, skills, and tools can now be accomplished by a device that’s always in your pocket. Or at least, that’s what most people believe.

The technology isn’t the full story. The aspect of photography that elevates it from simple memory capture to art is the artist. (A few of the greats are listed here if you’re interested: https://www.wardynskiphoto.com/gallery/the-best-photographers-of-all-time/.) As with all artists, the tools of the trade are less important than the skill the artist brings to the craft. Bob Ross could create a beautiful landscape with cheap paints and brushes. Jimi Hendrix or B.B. King would still produce fantastic music on an entry-level guitar. And Ansel Adams could take a beautiful picture with a point-and-shoot camera. Each of these artists had specific paints, brushes, guitars, and cameras they preferred, tools that let them maximize their creative vision, but even the best, most advanced tools do not create art without the artist. The artist is still the most critical variable in the process of creation. Having easy access to a high quality camera in your pocket does not guarantee that the photos you capture will be high quality too. Post-processing, Photoshop, and AI can all improve upon a photo, but they are limited by the source material.

One downside that accompanied the ubiquitous ability to capture high quality photos is a lowering of standards and expectations. The average consumer cannot describe the differences between a professional photo shoot and a set of photos taken by a friend on an iPhone, and they don’t need to. Their goal is to capture what they feel is important to remember, and photos taken on an iPhone rekindle the memory as well as photos taken with a medium format camera. The ease with which we can snap a photo, combined with the high technical quality and sheer number of shots we can capture at almost no cost per picture, has lowered the value of photography’s output. The average person places a higher value on immediacy and volume than on artistic or technical quality. This creates a difficult environment for professional photographers, whose higher quality output is valued less than it once was.

Why, then, are those bulky cameras still around and used by professionals? Today it’s rare for non-professionals to take photos with a dedicated camera, if they even own one. The iPhone has become the only camera most people own or use. Computational photography is amazing, but physics dictates that it can only ever produce an approximation of what a higher quality camera can produce. Artists, professional photographers, art curators, and anyone else in a profession where control, nuance, and fine detail matter still prefer the performance and output of large-sensor cameras. With that said, many professionals have integrated an iPhone into their workflow alongside their dedicated camera for rapid prototyping, rapid sharing, or non-critical shots. To ignore the capabilities and impact of the iPhone is to invite the end of a photographer’s career.

The key, then, to surviving as a photographer in a world where the iPhone exists is to figure out the spaces and places where artistry, nuance, and fine detail are critical. Outside of the art world, businesses still rely on professionals for their image capture. Customers may not value a professional’s output for their daily memory capture, but expectations are different when they purchase an art print for their wall, visit an art gallery, or peruse a catalog of products or marketing material. There are still plenty of use cases where we expect and value, sometimes subconsciously, images that are created deliberately by a skilled individual with high quality tools.

Kristen is, in my admittedly biased opinion, capable of producing art with a camera. Her best work is created with her trusty 6D, but her iPhone photos are also great. Despite decades of casually working alongside her with a camera, even today we can stand in the same spot and capture the same scene with the same camera, and somehow her photos are always obviously better than mine. Out of ten shots, I’m lucky to get one worth keeping compared to hers. It’s good, then, that she’s the professional photographer and not I. When I’m not admiring and supporting Kristen at https://kpcreates.com, my day job, passion, and profession lie in software development.

What do I take from this indulgence in the history of photography and its disruptive technologies that can be applied to the software development industry? First, it’s important to acknowledge that software development is a form of creation, a form of art. Film photography started out requiring specialized knowledge of chemistry to develop and fix negatives, then evolved into a service where undeveloped negatives could be sent off and returned as printed photos. It evolved further into a form where the entire workflow takes place on a single device, from capture to display, in fractions of a second. People no longer need any special knowledge to take a photo. If they want to capture a memory, they pull out their phone, point, tap, and immediately see what was captured. But even with that ease of capture, producing a gallery-quality photo still requires knowledge and skill beyond the average person’s abilities.

Software development tools have evolved from punch cards to assembly to compiled code to intermediate and interpreted code. The training and skills required to create applications have likewise evolved, from multiple years of experience in mathematics, algorithms, architecture, and graphic design to asking one AI to develop an efficient prompt for another AI that builds an app for you, complete with a database, APIs, and a polished UI. Don’t worry, I won’t bore you with a recounting of this industry’s history too.

As with the iPhone in photography, AI has lowered the barrier to creating everyday applications to something most people can manage in a few minutes or hours. To most people, the final product will appear similar in quality to a well-designed application, and the applications consumers generate with AI will be perfectly sufficient for personal and small-scale use. However, there are still places where the specialized talents and knowledge of experienced software developers, testers, analysts, graphic designers, and infrastructure and operations specialists are required to take an application to the necessary level.

The tool that creates the application isn’t the end goal. Software applications are created to serve people’s needs, and we need to keep that purpose in sight as we necessarily embrace and integrate these powerful new tools. Learning how to use them, and choosing the right tool for a given problem, is where the great artists will separate themselves.