Publishers should adjust their catalogue data to suit different purposes, says BookEngine's Stuart Waterman—and title manipulation software can help
As a recent report from the Copyright Centre showed, publishing recognises the increasing importance of metadata. Research from Nielsen confirms the correlation between comprehensive metadata and increased sales, and publishers now have plenty of resources to help them achieve it, including two very useful blogs by EDItEUR's Graham Bell. IPG members would do well to heed his advice.
It is worth adding that when it comes to making books more discoverable through metadata, one size does not necessarily fit all. That is to say: metadata associated with a particular title is unlikely to perfectly fit every channel through which that title is marketed and sold. Relying on one data feed for all the discrete microsites, imprint websites, promotions, social media and many other channels restricts a publisher’s marketing agility and limits the impressions it makes on potential buyers.
This is where the notion of a title manipulation engine comes in: software that sits between the canonical ONIX feed and consumer-facing content, giving a publisher the flexibility to distribute, augment and override its content at will. For example, suppose a publisher wants to host a microsite for a book, but wishes to offer it at a different price than on other internet retailers and to add more metadata than is found in its current ONIX feed. Without a title manipulation engine, these variations would need to be entered into a CMS by staff, and any updates would likewise have to be made by hand. When the process of overriding metadata is multiplied across many books, the staff hours soon add up. And what if the member of staff responsible for those jobs leaves? How and where is all this information recorded?
Of course, any title manipulation engine relies on having a high quality ONIX feed to begin with. But it allows our publisher in this example to create a separate, bespoke version of the feed that is suitable for this particular instance. When the data in the feed is updated, it is reflected on the microsite and anywhere else it is pulled in—but the canonical ONIX feed remains untouched. In this way, our publisher now effectively has a catalogue API it can use separately from its canonical ONIX feed.
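The override layer described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not BookEngine's actual implementation; the record fields, channel names and prices here are hypothetical.

```python
# Hypothetical sketch of a channel override layer. The canonical record
# stands in for one product in the publisher's ONIX feed.

canonical = {
    "isbn": "9780000000001",
    "title": "Example Title",
    "price_gbp": 12.99,
}

# Per-channel overrides and additions live separately from the canonical feed.
overrides = {
    "microsite": {"price_gbp": 9.99, "long_blurb": "An extended description..."},
}

def channel_view(record, channel):
    """Merge channel-specific overrides over the canonical record.

    The canonical record is never mutated, so a refresh of the ONIX feed
    flows straight through to every channel while the overrides persist.
    """
    view = dict(record)                      # copy: canonical stays untouched
    view.update(overrides.get(channel, {}))  # apply any channel overrides
    return view

print(channel_view(canonical, "microsite")["price_gbp"])  # 9.99 (overridden)
print(channel_view(canonical, "retail")["price_gbp"])     # 12.99 (canonical)
print(canonical["price_gbp"])                             # 12.99 (unchanged)
```

Because the merge happens at read time, correcting a title or price once in the canonical feed updates every channel view automatically, while the microsite keeps its bespoke price and blurb.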
BookEngine manages these changes via an automated nightly ingestion of data, so any amendments are reflected by the morning. That means staff can get on with work that isn't the publishing equivalent of watching paint dry. We've all made mistakes when carrying out tedious and repetitive manual tasks, and metadata maintenance is no less prone to human error, especially when there's so much of it to be handled. A title manipulation engine greatly reduces the risk that consumer-facing web properties become a drain on publishers' resources and brands. Many would agree that few aspects of publishing are more boring, yet more important, than metadata, so any help in freeing up time to focus on the fun stuff has to be welcome.
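The nightly ingestion step boils down to comparing today's feed snapshot with yesterday's and re-publishing only what changed. Here is a hedged sketch of that idea; the ISBNs, fields and the snapshot-as-dictionary representation are illustrative assumptions, not a description of BookEngine's internals.

```python
# Hypothetical nightly-ingestion step: diff two feed snapshots and report
# which records are new or amended, so only those need re-publishing.

def changed_isbns(previous, current):
    """Return ISBNs that are new or whose data differs since the last ingest.

    `previous` and `current` map ISBN -> record dict (one entry per product).
    """
    return sorted(
        isbn for isbn, record in current.items()
        if previous.get(isbn) != record
    )

yesterday = {
    "9780000000001": {"title": "Example Title", "price_gbp": 12.99},
    "9780000000002": {"title": "Another Title", "price_gbp": 8.99},
}
today = {
    "9780000000001": {"title": "Example Title", "price_gbp": 14.99},  # price change
    "9780000000002": {"title": "Another Title", "price_gbp": 8.99},   # unchanged
    "9780000000003": {"title": "New Title", "price_gbp": 10.99},      # new record
}

print(changed_isbns(yesterday, today))  # ['9780000000001', '9780000000003']
```

Run automatically overnight, a routine like this is exactly the kind of tedious, error-prone comparison work that no member of staff should be doing by hand each morning.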