Thursday, April 4, 2013

Intelligent Content, Algorithm as Editor, and Poetry in Decline

After college, I was intent on becoming a full-time poet.  In that pursuit, I read poems for small cash at the American Legion, the VFW, Future Farmers of America, and Daughters of the American Revolution.  I usually selected anti-war poetry, and for that reason I rarely got a return engagement.  I had a brief run of success with religious poetry during Holy Week until an Episcopal church in Bethlehem, PA, objected to my “On the Medical Aspects of the Crucifixion,” which I considered balanced and in good taste.  Soon after I joined Rodale, I read a poem about male executives cheating on their wives while on business trips.  After a time, I found it easier selling advertising.

Poetry is like a skin rash that returns when the seasons change.  After months of chewing on big data and tracking delicious publishing workflows, I decided to take a course, from a psychological perspective, on the Prague-born, German-language poet Rainer Maria Rilke, author of the Duino Elegies and Sonnets to Orpheus, works that still have something to say about soul one hundred years after publication.

I read a few years ago that Steve Jobs was an inveterate reader of William Blake, the visionary 18th-century English poet.  Many of Blake’s contemporaries considered him mad, and that is probably why he is still interesting while T.S. Eliot and Ezra Pound barely survive as academic footnotes.  Blake spoke of “Jesus the Imagination” and the importance of seeing the world through a Third Eye.  Maybe this is where the iPhone was hatched.

Colleagues I haven’t spoken to in decades sent me without apparent glee a recent WSJ piece by Joseph Epstein announcing yet again the end of poetry.  I can barely read my own tea leaves, though I suspect Epstein is right that academics had a lot to do with this decline.  T.S. Eliot trumpeted an aesthetic he referred to as the “objective correlative,” suggesting that every image in a poem has a recognizable referent in the physical world.  And the soul went out the window.  A school delightfully called the New Criticism, centered in Sewanee, TN, grew up in the 1920s around this notion, and generations of tight, tidy, analyzable, academic-friendly poems were written by poets, for poets and the New York Times Book Review.  I rode the train for many years before getting off at a station called Rilke.

It’s not particularly surprising that university English departments would adopt analytic tools consistent with the times as a way to show prospective students that the discipline had as much science in it as the psychology and sociology departments.  To aid in this effort, lots of academic journals sprang up in the post-WWII years, feeding this hunger and providing university administrators with leverage.  You either published or perished.  I published a lot—and left.

I studied linguistics and generative grammar in college and understood early on that “language” was “chunky” and, in a way, self-generating.  I taught this in a Hollidaysburg, PA, English class and was reprimanded by my supervising teacher for coloring outside the lines.  But this is a long way from machine language reading me and updating content based on my quirks and browsing habits.  This is the premise of Roger Wood and Evelyn Robbrecht, of the Art+Data Institute, in a recent fascinating piece about Intelligent Content at PaidContent.  They write that “books and magazines of the future will act as sort of human computers translating your reading desires into pure machine language that tells the publisher how to present the material for faster and more pleasurable absorption.”

The authors observe that we are seeing the beginning of this development with Flipboard, where consumers can now create, curate, and share their personalized magazines, with perhaps a revenue-share component down the road.  They point out that WordPress and Tumblr appear to be the closest thing to offering an always-on and continually updated reader experience through analytics.  This brave, not-so-new world “will be filled with mashups, video, audio, real-time updates, new navigation interfaces and even content that interacts with the reader’s environment,” such as Augmented Reality.

Freud declared one hundred years ago that we had to depend on a strong Superego--our civilization and culture--to hold back the dark forces of the unconscious.  Freud was buttoned-up, but did have a bit of the novelist in him and liked to draw large and startling figures.  With much less at stake, I have been listening for at least fifteen years to publishers, including yours truly, who trumpeted with a certain logic that the editor was the last defense against the wild forces of the web and social media, which spit out unruly content from a stream-of-consciousness spigot invented by an increasingly mad James Joyce.

Hyperbole aside, this is a defensible position and a vital business posture where “branded content” is a powerful and necessary selling proposition, especially for magazines.  But what if our authors, our seers, are right in their view that the algorithm will replace the editor and curator: “Quick and automatic branding and positioning of the book or magazine on a glowing electric slab will become more important than the most sage human editor.”

Content farms such as Demand Media, lambasted by publishers and downgraded by Google, might be only the first iteration of using data to develop article ideas.  Wood and Robbrecht go further, suggesting that big data from reading and search behavior will help predict which articles are likely to rate highly in terms of reader engagement.  Publishers also will be able to choose subject matter with an eye to ROI.  If true, this would be profoundly disruptive and interesting: software as the secret sauce.  And as the authors indicate, we have only to look at Quartz, Gravity, Contextly, and Sailthru to find evidence of companies that are developing tools to customize, personalize, and update content on a device level.

A few years ago, I invited Demand Media into MPA to speak to editors and publishers about their use of search data to choose and shape articles for their various websites.  The NY editors at this forum were understandably neither happy nor impressed with what they saw.  Demand Media was, and to some degree still is, a race to the bottom from any editorial perspective.  But that was yesterday.  What we are seeing today are analytics that can shape content to a user’s needs and subtle inclinations.  This is neither far-fetched nor bottom-fishing.  On the contrary, it is consistent with everything publishers say about the importance of user engagement in terms of the lifetime value of the customer and the immediate P&L.  Intelligent Content can have a positive impact on the bottom line.

These days, we hear a lot about Responsive Design, which enables web content to be distributed via templates to various devices.  This is an important development, but it has raised key business and advertising issues that need to be resolved.

It will likely be some time before we see the Algorithm atop the editorial masthead and Intelligent Content the rage.  But the authors present ideas for a content future that already exists in germ.  Writers and editors are becoming more sensitive to content use by marking up content with metadata at birth so it--and they--will have more value downstream.  They will also have to become more sensitive to analytics and the role of algorithms in reading, serving, shaping, and updating content based on what consumers want in real time.  This is why we call it rich media, the new augmented editorial reality.
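For readers curious what “metadata at birth” looks like in practice, here is a minimal sketch.  Everything in it is hypothetical--the `Article` class, its fields, and the serializer are invented for illustration--but it shows the basic idea: content that carries its own descriptive data (author, tags, derived measures) so downstream search, recommendation, and analytics systems can work with it.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: an article marked up with metadata at creation,
# so downstream systems (search, recommendation, analytics) can use it.
@dataclass
class Article:
    title: str
    author: str
    body: str
    tags: list = field(default_factory=list)

    def to_intelligent_content(self) -> str:
        """Serialize the article plus its metadata as JSON for downstream use."""
        record = asdict(self)
        # A derived measure attached at birth rather than computed later.
        record["word_count"] = len(self.body.split())
        return json.dumps(record)

article = Article(
    title="Algorithm as Editor",
    author="Staff Writer",
    body="Analytics now shape content to a reader's needs.",
    tags=["intelligent-content", "analytics"],
)
print(article.to_intelligent_content())
```

In a real workflow the schema would come from a shared standard rather than an ad hoc class, but the principle is the same: the markup travels with the content from the moment it is written.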
