“Product Literature Database” (PLD) … what the heck is this?

Pharmaceutical companies have an unavoidable need for regular, if not permanent, analysis of the literature published on their products. They are not only legally obligated with regard to pharmacovigilance (e.g. processing any undesirable drug effect reported anywhere) and medical information (e.g. answering product inquiries from practitioners and pharmacists). Beyond that, product reports in the scientific literature are a treasure trove of real-world findings on product behavior that support marketing, competitive intelligence, product innovation, and more.

But we are talking about a giant pool of millions of publications in thousands of scientific journals, and it is growing every single day. Ad-hoc searching and analysis costs considerable amounts of money:

  1. Professional literature databases are quite expensive, with a market dominated by only a few providers, which sometimes behave like monopolists.
    (And to make one thing clear early on: no, PubMed clearly does not qualify, especially due to its lack of comprehensiveness and its poorly finished content.)
  2. It is very time-consuming to find precise and significant answers by searching and analyzing the scientific literature. And the cheaper the source, the more (expensive) human effort is needed.

 

The solution

So-called “product literature databases” (PLDs) or “corporate literature databases” deliver the knowledge of what has been published about your own products … much more efficiently than highly redundant, duplicated individual ad-hoc literature searches.

PLDs are essentially subsets of the worldwide literature, containing only publications that mention at least one of the company's products. Typically, they are filled by automatic search agents (search profiles) or by feeds delivered by database providers.

Well-designed PLDs also provide mechanisms for rating publications, annotating information and signaling predefined events.
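To make these mechanics a little more tangible, here is a minimal, purely illustrative Python sketch of how a search profile and event signaling could work in principle. The product names, event keywords, and record fields are hypothetical; a real PLD works with licensed provider feeds, controlled vocabularies, and far more sophisticated indexing.

```python
from dataclasses import dataclass, field

# Hypothetical product terms a search profile might watch for.
PRODUCT_TERMS = {"examplumab", "fictivastatin"}
# Hypothetical predefined events to signal, e.g. for pharmacovigilance follow-up.
EVENT_KEYWORDS = {"adverse event", "hepatotoxicity"}

@dataclass
class Publication:
    title: str
    abstract: str
    rating: int | None = None                      # manual relevance rating, e.g. 1-5
    annotations: list[str] = field(default_factory=list)
    events: list[str] = field(default_factory=list)

def matches_profile(pub: Publication) -> bool:
    """Keep only publications that mention at least one company product."""
    text = f"{pub.title} {pub.abstract}".lower()
    return any(term in text for term in PRODUCT_TERMS)

def flag_events(pub: Publication) -> None:
    """Signal predefined events found in the publication text."""
    text = f"{pub.title} {pub.abstract}".lower()
    pub.events = [kw for kw in EVENT_KEYWORDS if kw in text]

# Example: filter an incoming feed delivered by a database provider.
incoming_feed = [
    Publication("Case report: hepatotoxicity under examplumab",
                "Description of a single-patient adverse event ..."),
    Publication("Advances in crop genetics", "No product mention here."),
]
pld = [pub for pub in incoming_feed if matches_profile(pub)]
for pub in pld:
    flag_events(pub)
    pub.annotations.append("imported via search profile")
print([(p.title, p.events) for p in pld])
```

The essential flow – filter on product mentions, then rate, annotate, and signal – is what a real PLD implements at scale.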

 

External PLD providers

UK-based Pi2 solutions Ltd. is an established vendor of customized pharma and biotech PLD solutions and was acquired by ProQuest Dialog this summer. Pi2 traditionally supports Pfizer, and since 2009/2010 also Wyeth (which had previously used OvidSP® for ad-hoc literature research). A 2013 poster presented at the 9th Annual Meeting of the International Society for Medical Publication Professionals may give some insight into general approaches. Beyond that, no public information is available on Pi2's market success or market share, and I am quite curious about the impact of the new collaboration with ProQuest.

Other potential providers of PLD solutions are the major B2B specialist information database and service providers, such as Reed Elsevier, Thomson Reuters and Wolters Kluwer, which effectively dominate the mass market for raw literature data. Elsevier in particular has already shown strong interest in providing more customized and mature services to industry clients. They quite recently built a customized product literature service for Sanofi's pharmacovigilance by combining their database content with the QUOSA literature management software.

 

In-house PLDs

The Novartis pharma group had its own internal PLD, called “eNova”, since the late 1960s. It was the most mature and significant PLD I have ever seen. Novartis did not only collect literature on its products; it also applied a kind of in-depth ‘digestion’ of reported product findings and clinical data. As a result, the PLD was able to answer questions on any aspect of real-world drug behavior very precisely, at the push of a button. “eNova” was finally discontinued and shut down by Novartis at the end of 2013, despite internal analyses showing a substantial positive impact on productivity, with individual time savings of 93% and more for product-related literature research & analysis.

Roche also once had an internal PLD similar to “eNova”, which was already shut down a couple of years ago. As a “side effect”, the corresponding product literature research & analysis activities and workload were distributed across the organisation. For example, each national affiliate had to replace the service with its own solution in order to continue mandatory MedInfo deliveries and to comply with regulatory expectations. It goes without saying that this splintering into different solutions and approaches resulted neither in an overall productivity increase nor in overall cost savings.

A little later, after the negative effects had become more and more evident, Roche tried to reactivate its in-house PLD. But unfortunately the reintroduction failed, as a double-digit-million CHF investment would have been needed but was not provided.

By the way, that is much more money than simply continuing the Roche in-house PLD would have cost.

 

Why do PLDs have such a poor standing?

Watching the developments at Novartis and Roche, one automatically ends up asking why their PLDs were shut down … despite the obvious downsides for the enterprises. Actually, there are certain dependencies and basic conditions for the sensible operation of an in-house PLD. And those dependencies and basic conditions sometimes run contrary to currently practiced management paradigms.

  1. PLDs need a long-term view and sustainability. But management approaches currently practiced in pharma enterprises are rather near-term and quite volatile. Without strategic integration, a PLD is always in danger of falling victim to a short-term budget or structural decision, just like other internal services. But in contrast to other internal services, such a decision is far more fatal for a PLD, as it cannot simply be switched back on once the negative side effects become obvious.
  2. PLDs save money in the field. What a fatal dilemma. As a central service, PLD costs are budgeted within a (global) business unit, which does not necessarily benefit from the service itself. The corresponding cost savings, on the other hand – higher productivity, reduced spending on external providers, synergy effects, and so on – materialize across the whole organisation, in completely different business units. As a result, budget and benefit are organizationally decoupled. Overall, the enterprise gains a tremendous advantage and cost savings from the PLD. But unfortunately, this full-picture view carries less and less weight in individual budget decisions.
  3. PLDs are IT … no, they are not. Effective PLDs certainly need a powerful IT infrastructure, databases, and more. Unfortunately, this carries the risk of rashly assigning PLDs to the IT department. To be very honest, in my opinion that is the wrong place. I also need a PC to work efficiently and powerfully, but I am far from being a software engineer. For me, a clever PLD implementation includes a clear home within the business, or at least within a well-established liaison function between business and IT.
  4. PLDs are strong as central functions. Only then can they fully exploit the resulting synergies. In contrast, there seem to be recurring “waves” within pharmaceutical enterprises of distributing tasks across the whole organisation. The underlying thought is “we save money at Global (e.g. for the PLD), and the work is shared by all”. A funny thought … unfortunately with fatal implications for the productivity of associates.
  5. PLDs are designed by information experts for information experts. True, and there are historical reasons for this. But this approach no longer fits today's reality within pharmaceutical enterprises. Over the past 10-15 years, pharma has consistently reduced the number of trained information professionals. As a result, today's users of PLDs are increasingly subject matter experts (e.g. medics) without specific expertise in using professional information tools. And to be honest, I have so far not seen many PLDs that serve these new user groups adequately in terms of usability.

 

Conclusion

An in-house PLD – cleverly designed and implemented – can reliably cover the need to know what has been published about a company's own products. It also prevents trouble with regulatory expectations and authorities, and increases productivity at the same time.

But “cleverly designed and implemented” also includes long-term strategic integration within the enterprise, as well as a reasonable degree of independence from short-term decisions and tactical changes. Any short-term shutdown of an established in-house PLD carries the risk of creating hidden but substantial costs. And in all known cases it has been an irreversible return to zero.

Currently, one of the biggest challenges for PLDs is to give medics and other non-information professionals efficient access to product answers, especially via more productive and intuitive user interfaces. Success will be the result of voting with one's feet … or rather, with one's keyboard.

Moving online

Please forgive an old hand this melancholic flashback. But one of the most inspiring projects of my career was the successful migration of a business publication from print to online around the turn of the millennium. “Inside-Lifescience” was a multi-channel online magazine covering the latest news and trends in life science, pharma and biotechnology.

But let’s start with the roots. The publication was originally developed in early 1999 by a leading German publisher of specialized journals in the fields of life sciences/medicine together with the information broker business I had just founded. The publisher's intention was to establish a periodical information resource reflecting the emerging European biotechnology industry. We – with our know-how in information research combined with in-depth knowledge of life science technologies and the industry – proved to be the right content partner for this project. The result of combining both areas of expertise was the printed monthly newsletter “BIONEWS”. “BIONEWS” mainly contained news from all over the world, arranged in categories. This was complemented by an event calendar, links to web sources, and an editorial. “BIONEWS” carried no advertising and was financed exclusively by subscriptions.

As the aim of “BIONEWS” had always been to cover current trends and to be as up-to-date as possible, we soon realized that a print publication has natural limits regarding timeliness. At a monthly frequency, the news for a single issue was collected over a couple of weeks. Layout, typesetting, printing, and delivery needed at least another week. So by the time the reader held his copy in his hands, some news items were already four or more weeks old. Not exactly highly topical! The only way for a print publication to overcome this limitation would have been to shorten its publication interval. But that would also have multiplied the operating effort and costs.

So, what alternatives did we have? After some discussion we finally decided to move online. This sounds obvious from today's perspective. But at that time it absolutely was not. Well, to be honest, the facts spoke for themselves:

  • an online publication could be updated more frequently (up to several times a day)
  • the editors could react more flexibly to events of immediate interest to readers
  • there were no longer regular expenses for typesetting, printing, and delivery
  • production could concentrate on content rather than layout
  • the production process could be improved through content management technologies
  • new database-driven products became feasible
  • the basic content was free to readers because financing relied on advertising and enterprise services

As a result, the whole production process, from initial content research through to the archives, was improved … resulting in a new product and new services at lower cost.

But the lower running costs came at the price of substantial set-up expenses. While the print version could be produced via the publisher's standard production path, the online version needed a completely new infrastructure. We found it in an information management system that was able to channel incoming as well as outgoing information and allowed content to be published on the web automatically. The system could also mail electronic newsletters, send SMS messages, and fill WAP pages in parallel with the HTML pages (for generation Y: WAP was an early technology for making web pages visible on mobile devices with – at that time – minimalistic displays). Further technical problems had to be solved: “Inside-Lifescience”, the new name of the publication, needed a web server, and a reader-oriented web layout had to be developed.
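Purely as an illustration of the single-source, multi-channel idea (a hypothetical sketch, not the actual system we used), here is a small Python example that renders one structured news item for the web magazine, the email newsletter, and an SMS alert:

```python
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    body: str
    category: str

def to_html(item: NewsItem) -> str:
    """Render the item for the web magazine."""
    return f"<article><h2>{item.headline}</h2><p>{item.body}</p></article>"

def to_newsletter(item: NewsItem) -> str:
    """Render the item as plain text for the email newsletter."""
    return f"{item.category.upper()}: {item.headline}\n{item.body}\n"

def to_sms(item: NewsItem, limit: int = 160) -> str:
    """Render a short alert, truncated to fit a single SMS."""
    return item.headline[:limit]

# One content record, many output channels.
item = NewsItem("Biotech firm announces new protease inhibitor",
                "The compound is scheduled to enter phase I trials next quarter.",
                "pharma")
for render in (to_html, to_newsletter, to_sms):
    print(render(item))
```

The real system additionally handled WAP output, scheduling, and archiving, but the principle – one structured content record feeding many channels – is exactly the one described above.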

Setting up a new information system had not only a technical dimension but also psychological aspects. Established working habits had to be changed. System users (the editorial staff, for example) needed an introduction to the new software. The internal “routing” of information changed. More information had to be shared internally. And I am sure you all know the sentence “But we have always done it this way, and it has always worked fine!” Fortunately, I had a quite young team that showed the flexibility necessary to successfully manage these changes.

The publisher now took on the marketing part of the project. They were an established, professional marketing partner within the life science industry. They had the contacts to sell banner placements as well as corresponding “Inside-Lifescience” enterprise products (such as content delivery for company websites). But they also had to learn, because selling an online banner is not the same as selling advertising space in print journals. So the project was a challenging and exciting experience for both partners. At the time it felt somewhat like the joint exploration of a new continent.

One important aspect should not be forgotten, as it still holds true today. Despite all the new media euphoria, we did not want to close our eyes to reality. In those early days, only a few online journals and information portals were substantially in the black. Online publishing was not really established in terms of return on investment. One reason may have been that internet users were used to getting information and content for free, and many people did not really acknowledge the value of high-quality information (in my opinion this has not really changed to this day). Back then I was convinced that there was only one promising strategy to earn the money needed to maintain an online information service: through accompanying products and cooperations. The few financially sound online projects, such as “Focus Online” in Germany, showed that this was the way to success at that time. “Inside-Lifescience” already had an advantage here because it naturally cooperated with a variety of print journals that were already under the roof of the publishing partner.

Finally, “Inside-Lifescience” launched as a true multi-channel publication with a web magazine, an email newsletter, a mobile edition, an AvantGo channel (for PDAs, at the time), and an SMS alert service. And most importantly … with exciting, interesting, relevant, and up-to-date quality content. We offered an always current view of the biotechnology industry and had external industry insiders contributing editorials. “Inside-Lifescience” ran as a successful online magazine with thousands of readers for a couple of years. It was discontinued when the collaboration ended due to a takeover of the publishing partner. We kept the online platform for a few more years as our corporate publication for clients and stakeholders, which led to some major project acquisitions. But that is another story.

Revised version of the article “Moving Online – Developing an online information portal”, originally published in October 2000 in Business Information Searcher, ISSN 1365-5760

Bottlenecks in Proteomics

Let’s start with a joke: “What do three Germans do when you put them into one room? – They found an association!” In Germany we have associations for everything, even in the smallest village: associations of hen breeders, associations of stamp collectors, associations of local singers, associations of hobby gardeners, associations of wine drinkers, associations of The Kelly Family concert visitors, and so on. Since late 2001 we also have the German Society for Proteome Research (DGPF), whose very first founding charter was written down on a beer mat (well, we are in Germany, aren’t we?).

The foundation of the DGPF by scientists and industry representatives was a reaction to recent market and application trends towards protein research. Germany already had strong Proteomics (and protein) research while others were still chasing the holy grail of Genomics. But – as far as I can tell – this was never communicated very well. So one major aim of the DGPF will be to raise international awareness of the high level of German Proteomics.

But why are researchers and industry increasingly focused on Proteomics? One of the major disadvantages of Genomics approaches is the missing link between a gene and its cellular function. The fact that a gene has been sequenced does not tell us the cellular function of the gene product. That is what makes genomic results so difficult to interpret. Even sequence analysis with bioinformatics tools does not yield the full picture. Additional problems arise from the organisation of the genetic information, as well as from the fact that only a subset of genes is active in a specific cell at a specific stage.

So scientists are moving to the functional level, to the gene products, to the proteins. And they coined the new term “Proteomics” for the study of the complete set of proteins (functions) of a cell at a specific stage, in analogy to “Genomics”, which addresses the complete set of genes (information) of a cell.

As with other approaches that promise large-scale industrial application (in drug discovery, for example), whether Proteomics gets its chance will depend on the industry that develops and supplies the technology. While researching this article, I had the impression that some companies simply stuck the Proteomics label onto their existing products. That is neither a solution nor does it really fit researchers' needs. But where are the bottlenecks, and what has to be done?

There is a dramatic increase in complexity when switching from the genetic to the functional level. A gene is a gene is a gene. There is some variation caused by introns and foreign elements as well as by expression control. But our scientific thinking is dominated by the “one gene – one protein” paradigm, even though our knowledge of posttranscriptional modifications has shown that it is not quite that simple.

With proteins, every single candidate has to be viewed in the context of multifunctionality and networking. In many cases one protein is not just one function. It is part of a highly complex cellular network of interacting and cascading activities. The function of most regulatory proteins, for example, depends on the environment (in terms of ‘cellular clock’ and location), on posttranslational modifications, and on interacting partners. As a result, one protein might have several functions depending on where, when, and with whom it is. This is what gets Proteomics into trouble.

First, powerful technologies are still not available for many aspects of large-scale protein research. Friedrich Lottspeich, head of the protein analysis group at the Max Planck Institute for Biochemistry in Munich and DGPF chairman, said that current methods show great potential but are not yet ready for the industrial job, in drug discovery for example. There are only a few suitable solutions for automation and high throughput. Early-stage MALDI-TOF applications work pretty well, for example in Structural Proteomics. But problems with high-throughput sample preparation and with low-abundance and hydrophobic proteins remain unsolved. In Functional Proteomics, automated interaction screens based on two-hybrid, SPR (surface plasmon resonance) or TAP (tandem affinity purification) technologies – which are essential for uncovering the networking aspect of proteins – are in their infancy. Antibody-based biochips already point in the right direction.

Second, proteome research produces huge amounts of data. In line with the higher complexity, Proteomics generates exponentially more data than Genomics does. But drug discovery (and scientific research in general) is not just about collecting data, even if one might suspect that some scientists think so. No, scientific progress depends on results derived from the analysis and interpretation of the collected data. And that is becoming more and more difficult with increasing complexity.

Finally, the complexity of protein functionality has to be taken into account when moving forward. One attempt at this is the field of Integrated Proteomics, which considers various views by combining data from different approaches and sources. But this again increases not only the total amount of data to be analysed but also the level of complexity. According to Thomas Franz, head of the Proteomics core facility at EMBL Heidelberg, existing bioinformatics solutions are not able to analyse the produced data adequately, either quantitatively or qualitatively. This opinion is shared by a number of colleagues working in the field. Scientific teams are analysing the data manually again because this is more effective and still yields the most meaningful results.

This conclusion answers my question of what has to be done. There is a deep need for at least a) large-scale protein research technologies, b) suitable bioinformatics solutions, and c) Proteomics-optimised devices.

I am curious about the future development of Proteomics. It might be overtaken by other “-omics” in public attention. But I am convinced that Proteomics will contribute important findings to our understanding of how a cell works. And it certainly is, and will remain, a major market for technology suppliers and bioinformatics companies.

Originally published in April 2002 in Inside-Lifescience, ISSN 1610-0255.