Earlier this month, Adobe, the industry giant of creative software, held its annual creativity conference, Adobe MAX, in Las Vegas, Nevada. Over several days, creatives from across the globe gathered (or streamed from home) to attend sessions on topics like photography, illustration, user experience, web design, and much more. Conferences like Adobe MAX give industry professionals opportunities to network with each other and learn more about the future of the software they rely on. This year's event was arguably the most attention-grabbing since Adobe announced its shift to subscription-based software plans in 2013.
The most headline-worthy news to come out of Adobe MAX this year was the company's introduction of an artificial intelligence and machine learning platform that will operate within many of Adobe's flagship products. It's called "Sensei," and it's poised to be a game-changing component in the future of creative software. According to Adobe:
...Sensei is the intelligence that will change the way you do business and the way customers experience your business. It uses artificial intelligence (AI), machine learning and deep learning to help you discover opportunities that are hidden, make tedious processes fast, and show you which data insights matter — and when they matter. Adobe Sensei draws from our massive volumes of content and data assets and our decades of experience in creativity, marketing, and document management. Simply put, Adobe Sensei helps your business work smarter and faster.
If you've paid attention to any major announcements this year from tech giants like Apple, Google, or Samsung, this is likely not the first time you've heard the words "machine learning" mentioned on a stage. To put it in perspective: if you were to take a shot of whisky each time the term was mentioned during Google's I/O conference, you'd probably pass out before the keynote finished.
Machine learning and artificial intelligence are indeed buzzwords that have been circulating in the tech community for some time now. We've seen Google apply this technology in fascinating ways, such as in its Google Photos product, which recently added the ability to recognize the faces of users' pets. To test the capabilities of this feature, I tagged a photo of my corgi, Chloe, taken when she was about six weeks old.
To my immense shock and awe, Google Photos then immediately surfaced every photo of Chloe that I've ever taken and uploaded throughout her lifetime. (Side note: I take a LOT of pictures of my dogs...we're talking triple digits, people.) Even more surprising was how frighteningly accurate the algorithm was at matching Chloe's name to her pictures. It didn't mistake her for any other dogs I had pictures of (don't ask - too many to count), it didn't stop with just pictures of her as an adorable munchkin, and it didn't tag pictures of a loaf of Wonder Bread – all of which I would have completely understood! Nope! Rather, I received a jaw-dropping compendium of corgi pictures spanning the five years of her life, over which the internet would surely melt.
If the machine learning technology Google has shown off recently is any indication of what Adobe has in store, designers like me may begin wondering exactly how long we're going to be able to find work. We comfort ourselves by recalling the "human touch" we bring to our projects, the personal aesthetic an algorithm could never replicate. Or could it? After all, similar Google technology has already written music, drawn pictures, and classified images in astonishingly human-like ways. With Adobe's introduction of similar technology into the software I use to create every day, I can't help but wonder what the future of design will look like. While it's fun to speculate, none of us really knows how advanced algorithms will impact humanity over time. However, one thing does seem clear: the lines between art and technology are beginning to blur. Is it cause for concern? Probably not...yet, but Adobe's recent announcements have piqued my interest.