Active Searching for Taxonomy Resources

October 20, 2014  
Posted in Access Insights, Featured, search

“Time is free, but it’s priceless. You can’t own it, but you can use it. You can’t keep it, but you can spend it. Once you’ve lost it you can never get it back.” So muses American businessman Harvey Mackay.

We have no choice in the matter. Time cannot be “saved” …only spent. Our responsibility is to determine how we wish to allocate it. Otherwise, time will not only be spent but also wasted. How valuable, then, are those skills and tools that help us distribute our time in ways that we consider most useful and productive!

Shiyali Ramamrita Ranganathan proposed the fourth law of library science: “Save the time of the reader.” Fast and accurate retrieval of relevant information is one of the fundamental arguments in favor of enterprise taxonomy development and usage. Let’s consider some active search strategies that will help you avoid tail-chasing and wearying labyrinths when searching for project taxonomy resources to assist you in your knowledge management.

If you have ever conducted a keyword search on the open web for online taxonomy resources, you may have had some difficulty hitting your target. After simply typing the keyword “taxonomy” (or “taxonomies,” “thesaurus,” or “thesauri”) into your favorite search engine, you may have obtained less than satisfactory results. How many of your results were even remotely related to information structures, knowledge organization, or contextualized concepts organized by term?

How can you get better search results in less time?

  • Consider using operators and/or advanced search techniques
  • Isolate exact search phrases for use in full-text searches
  • Once you’ve found a good online resource, take one additional step to find similar results

Each of these three tactics is discussed below.

1. If your favorite search engine allows for operators, try enabling them under “advanced search settings.” Operators may be the common Boolean operators (AND, OR, NOT), or they may be different symbols that serve the same functions. Familiarize yourself with your particular engine’s operators and vernacular. In Google, for example, go to the “Settings” link at the bottom right of your Google search page. From the menu that appears, choose “Advanced search.” Scroll down to the entry at the left that reads “Use operators in the search box.”

Explore this page’s many options. A little time invested here typically yields large dividends in your future searches. You may decide to use the advanced search boxes, or you may use Boolean shortcuts like AND, OR, and – (for NOT). You can also truncate words or employ wildcards with the asterisk (*). An additional descriptor, such as “business,” “knowledge management,” or “project,” added to “taxonomy” will better identify your target.

The time you take to carefully construct your search query will help “prefilter” your results and increase their relevance. Your searching will begin to look more like this:

(KM taxonom* OR enterprise taxonom* OR business taxonom* OR project taxonom* OR corporate taxonom*) AND (manag* OR software)

2. In order to trim down the number of results, try the strategy of isolating exact phrases for full-text searching with quotation marks (“”). Your search queries will begin to look more like this:

(“project taxonomy” OR “enterprise taxonomy” OR “corporate taxonomy”)

Although it is possible to conduct the same searches in the “advanced search” option of most search engines, why not “save time” by learning a few of these shortcuts and experimenting with them?
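If you find yourself reusing the same Boolean pattern, it can even be assembled and URL-encoded programmatically. Here is a minimal Python sketch; the base URL and `q` parameter follow Google’s familiar search-URL pattern, but any engine with a similar query string would work:

```python
from urllib.parse import urlencode

# Exact phrases to OR together, as in the example query above.
phrases = ['"project taxonomy"', '"enterprise taxonomy"', '"corporate taxonomy"']
query = "(" + " OR ".join(phrases) + ")"

# urlencode handles the escaping of quotes, spaces, and parentheses.
url = "https://www.google.com/search?" + urlencode({"q": query})
print(query)
print(url)
```

Pasting the printed query into the search box gives the same results as opening the printed URL directly.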

3. Once you’ve discovered a resource or webpage that contains relevant content, try a few related searches to expand your results to similar, useful resources. In Google, for instance, you can type info: followed immediately by the web address (the Uniform Resource Locator, or URL) of a page you liked.


After you receive your results, look to the bottom of the page for additional options, such as related searches.


Consider another example. If you were pleased with what you found at a particular site, then try the related: operator, followed immediately by that site’s URL.


Type the following into your search box window, and note the differences and nuances of the results rendered. (NOTE: Leave no space between the operator’s colon and the URL that follows it.)

info: (Cut and paste the URL from the relevant site here.)

related: (Cut and paste the URL from the relevant site here.)

link: (Cut and paste the URL from the relevant site here.)

You might also try typing the URL into a “similar sites” search service. (Beware: many of the “results” there are interspersed with ads!)

In the next post in this series, we will consider additional active search strategies to assist you in using time wisely to ferret out resources for your taxonomy needs.

Eric Ziecker, Information Consultant
Access Innovations, Inc.

Covered In Metadata: Semantic Fingerprinting

October 13, 2014  
Posted in Access Insights, Featured, semantic

Once upon a time, there was a real art to finding something in a library, and the card catalog, in a way, was the medium. Those giant wooden cabinets were filled with mysteries to be uncovered, but the first mystery was how to navigate it. There were always the artists—the librarians—who could help you through it, but that is really only viable for a limited amount of content; librarians have other duties, after all.

For people with larger goals—authors, researchers, and the like—it could get complicated really fast. They’re never in the position of needing a single book or a single article; they need a mountain of them. Working in a very narrow subject made things a little easier, but the broader the subject, or the greater the number of narrow subjects, the more quickly it became clear just how much work it would be to successfully find everything they needed.

What’s more, it was virtually impossible to enrich the research with material they never knew existed, at least not without the direct help of a colleague or expert who could recommend new material to them.

It gets even more complicated when you start to consider expanding the search beyond specific titles into authors, publishers, or tangentially related subjects. Then you start to get into cross-references; those are complete sets of records in themselves. By now it’s an unmanageably huge amount of information to deal with, and librarians, magical though they may be, could only do so much.

The thing is that “once upon a time” really isn’t that long ago; advances in information sciences have turned that magic into something more accessible to everyone. Tagging documents with metadata to identify the author name, institutions, subject matter, or any relevant piece of information at all brings all of those card catalogs into a single databank, accessible all at once.

It opens up wide possibilities for content usage, but what about applying those same “tagging” principles to people? We like to call it Semantic Fingerprinting because, it turns out, tagging a person’s electronic record actually does reveal the uniqueness of the person.

In academic publishing, the benefit of this fingerprinting is pretty clear. Knowing the author’s name, date of birth, institution, or really anything you want allows him or her to be identified quickly and, more importantly, with accuracy. This is important for a couple of reasons.

On the author’s side, having proper credit for their work is of course important, and, with their name and, likely, their institution already tagged in their book or article, their identity is pointed straight at their tagged record, proving them the true author. Additionally, if the subject matter they’ve written about is tagged in their record, as well, a new article submission can be placed intelligently into the peer review process. If you write about nanotechnology, experts in the field can quickly be identified, and be sent the article for review, eliminating one of the many possible slowdowns in a tedious, but necessary process.

For the publisher, it’s just as important, as it makes categorization of various authors easier. With the subjects tagged, it becomes really easy to see in which journal the article belongs, but it also aids in sales and subscriptions, which are becoming more important to the whole process than ever.

Subscription prices are going up while institutional budgets are slashed, meaning that a university has to make some hard choices about which journals are most important to them. So for the publisher to be able to look at their author and institution identities is a big deal. If they get word that a university library is planning to cancel their subscription, they can match who from that institution published in the journal and suggest that maybe they reconsider, given that their faculty has published in the journal whatever number of times over the last ten years. It’s unfortunate to think of the bottom line all the time, but we’ve all got to keep the lights on.

Many of these same things apply for researchers, which gets back to the original problem of sifting through content in a library. When the document is tagged, the researcher can quickly identify all of an author’s published work, when it was published, and on what subjects. From those subjects, they can then see other authors who published on the same or related topics and, soon, you see a network of information starting to build that is massively useful to people all throughout the publishing process.

And while we talk about academic publishing a lot around these parts, the private sector can get just as much use out of Semantic Fingerprinting as the public sector. Suppose, as a random example, that the manager of a corporate marketing department is trying to put together a team for a big campaign. The manager needs people with very specific skills that may or may not go along with their job descriptions. Let’s say the manager had employees take a survey at some previous point, one that suggested individual skill sets. What if, then, each individual had those skills tagged within their employee record? Rather than having to hunt or, worse, simply hope that the chosen employees can perform the duties, the manager could just look at those skill tags and pinpoint exactly who is suited to which task.
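That skill-tag lookup is, at heart, a simple set operation over tagged records. A hypothetical Python sketch (the employee records and skill names are invented purely for illustration):

```python
# Hypothetical employee records with skill tags, as in the survey scenario.
employees = [
    {"name": "Avery", "skills": {"copywriting", "SEO"}},
    {"name": "Blake", "skills": {"graphic design", "video editing"}},
    {"name": "Casey", "skills": {"SEO", "analytics", "copywriting"}},
]

def find_candidates(employees, required):
    """Return names of employees whose skill tags cover all required skills."""
    return [e["name"] for e in employees if required <= e["skills"]]

print(find_candidates(employees, {"copywriting", "SEO"}))  # → ['Avery', 'Casey']
```

The same subset test works whether the tags describe skills in an HR system or subjects in an author record.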

I don’t know how many companies out there are doing stuff like that, but I can see so many possibilities in working with semantic fingerprints. I can imagine possibilities in just about any industry I can think of, and I’m sure there’s a mountain of uses that I haven’t fathomed yet. In connection with Linked Data, it could be almost endless.

Daryl Loomis
Access Innovations

Marjorie M.K. Hlava Named Winner of the 2014 ASIS&T Award of Merit

October 6, 2014  
Posted in Access Insights, Featured

ASIS&T, the Association for Information Science and Technology, has announced that it has selected Marjorie Hlava, President of Access Innovations, Inc., as the 2014 Award of Merit winner.

“The Award of Merit is our society’s highest award,” explained Richard Hill, Executive Director of ASIS&T, “and Margie has definitely earned it through her achievements. She has created opportunities where none previously existed, thereby expanding the field itself. In addition, as a member of ASIS&T, she has contributed countless hours of volunteer service to the great benefit of the Society.”

“Marjorie Hlava has spent forty years demonstrating how published theories of information science work in large-scale environments. Information professionals, and in fact people not even aware they are part of the information industry, use things she has created without realizing it. She has a keen eye for identifying ways in which fundamental principles of knowledge organization can become useful in the less-than-perfect environment of everyday applications,” wrote Harry Bruce, ASIS&T president, in the meeting program. “She could easily have led an academic life; however, she chose a different, and in many ways more difficult, way of shaping information science. She created a company and set of products and solutions (standards, schemas, languages, databases, taxonomies) that both applied principles and drove research by demonstrating what worked and what needs to be done.

“Patents, a diversity of projects, and a spirit of entrepreneurship illustrated and strengthened key linkages between associated fields. Her nomination packet includes five letters, all from significant information scientists, demonstrating how Marjorie is an example of how ASIS&T is unique in supporting a special blend of applied and theoretical work.”

Ms. Hlava was interviewed in April 2014 as part of the “Leaders of Information Science and Technology Worldwide: In Their Own Words” initiative, sponsored by ASIS&T under the guidance of the Special Interest Group for the History and Foundations of Information Science (SIG/HFIS) and the 75th Anniversary Task Force of ASIS&T. A video of the interview is posted on the ASIS&T website.

“I am surprised, delighted, and humbled by this honor,” commented Ms. Hlava. “I have always enjoyed my membership in ASIS&T and found the presentations to be a springboard for new ideas to try.”

Access Innovations CEO Jay Ven Eman observed, “The insights Margie has gained from attending the meetings and networking with other members have fueled her desire to undertake new (and sometimes daring!) developments with the company’s service offerings and, later, the software.  Conversations with other members have helped her find creative ways to address the applications of information science and its challenges. We look forward to many more years of continued involvement in ASIS&T.”

According to the ASIS&T website, “The Award of Merit was established in 1964 and is administered by the Awards and Honors Committee. The purpose of the award is to recognize an individual deemed to have made noteworthy contributions to the field of information science. Such contributions may include the expression of new ideas, the creation of new devices, the development of better techniques, or substantial research efforts which have led to further development of thought or devices or applications, or outstanding service to the profession of information science, as evidenced by successful efforts in the educational, social, or political processes affecting the profession.

“The award is a once-in-a-lifetime award and is sponsored by the Society-at-Large and is administered by the Awards and Honors Committee. The award shall be announced and presented to the winner by the ASIS&T President, with appropriate ceremony, at the banquet of the annual meeting of the Society.”

The presentation of the Award of Merit and the society’s other awards is to be made by Harry Bruce, the current ASIS&T president, at the upcoming ASIS&T Annual Meeting in Seattle, Washington at the Awards Luncheon on Tuesday, November 4, 2014.

About Access Innovations, Inc.
Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus and taxonomy creation, and semantic integration. Access Innovations’ Data Harmony® software includes automatic indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs. Data Harmony is used by publishers, governments, and corporate clients throughout the world.

About ASIS&T
Since 1937, the Association for Information Science and Technology (ASIS&T) has been the association for information professionals leading the search for new and better theories, techniques, and technologies to improve access to information. ASIS&T brings together diverse streams of knowledge, focusing what might be disparate approaches into novel solutions to common problems. ASIS&T bridges the gaps not only between disciplines, but also between the research that drives and the practices that sustain new developments. ASIS&T counts among its membership some 4,000 information specialists from such fields as computer science, linguistics, management, librarianship, engineering, law, medicine, chemistry, and education – individuals who share a common interest in improving the ways society stores, retrieves, analyzes, manages, archives and disseminates information, coming together for mutual benefit.

Fingers in Every Pot — Metadata Through the Publishing Pipeline

September 29, 2014  
Posted in Access Insights, Featured

Nobody is going to deny that publishing is, and always has been, a sometimes messy process, but sophisticated uses of metadata and taxonomies can help clean it up. It fascinates me how intimately they can work at every step of the process to make things easier on everybody, from the author writing the piece to the institution that publishes it, all the way through marketing and use.

Let’s start at the beginning, with the writer. Presumably, the person is an expert in his or her field, or at least working toward it, but that absolutely doesn’t make them an expert in searching for the information they need. That’s what always made library science so valuable, and while it is still extremely valuable (don’t want to offend my librarian friends out there), the rise of enriched metadata means that the content writers need to conduct their research can be laid out clearly and concisely in front of them. This allows them to work in a noise-free environment and produce their best possible work.

So they’ve done all that and it’s time to submit the work to publishers. As we’ve seen, this can be an ordeal, but semantically enriched content, once again, can be implemented to ease the process for both the author and the publication. Tagged with relevant thesaurus terms, the submission can be analyzed to identify its subject, where it can then be more easily sorted and sent to properly qualified experts in the field for peer review. This might seem like a small part of it, but any amount of time saved is a big benefit to the author, who is often under the crushing weight of tenure deadlines.

However, once the author’s submission is out the door and in the hands of peer reviewers, it goes through its revision process, sent back and forth until everything is squared away. This, of course, can take a long time, but once the work is ready for publication, metadata begins to take on its most important role. Those same (or similar) subject terms that helped direct the submission into peer review now help ensure that it is directed to the most relevant possible journal, so that the right people can easily find it.

This is the point at which, with the right tools and the right people in place, the metadata can really shine, because there’s so much that can be done with it. Once an article is published, either in an open access format like PLOS One or a more traditional subscription journal, its metadata can be used for an increasing number of purposes, anything from simple organization to highly advanced linked data.

Whatever that data is used for, the most important thing is that the content can be found. Everything after that is useless if it sits in the ether, hidden so nobody can read it. And as is likely fairly clear by now, metadata is absolutely crucial at this end stage, where other researchers need to locate the content to conduct their own work. Just as the original authors needed clear, concise search results when their process started, if these new researchers have their results muddled with noise, or relevant results get missed completely, it is much more difficult to find the necessary content. This can prevent authors’ work from reaching the people who require it and keep it from furthering work in the field.

That’s counterproductive to research, obviously, but it’s also totally unnecessary. It shouldn’t take much to get people to see how this kind of metadata enrichment can make authors’ and publishers’ lives easier. It’s relatively new and there are a lot of buzzy words attached to it, but that doesn’t change the value of the core concept.

The good news is that semantically enriched metadata is starting to show up all over the place. Software like Data Harmony from Access Innovations automates much of this to help academic journals and institutions facilitate research. The pile of metadata is already gigantic, so it’s vital that the new content that journals are constantly publishing gets analyzed and tagged swiftly and accurately.

To me, the furthering of research is the most important thing, but there is another step in the process, that of marketing and sales. It’s the same principle as with everything else here: you can’t buy what you can’t find. The place with the clearest inroads to the content the consumer is looking for will be the one that wins. But the truth is that the sooner that people adopt the ideas behind semantically enriched metadata, the sooner it is that we all win.

Daryl Loomis

Access Innovations

Inline Tagging Facilitating Linked Data

September 22, 2014  
Posted in Access Insights, Featured

Access Innovations recently debuted Data Harmony Version 3.9. Among its new features and fixes is a sneakily clever module called Inline Tagging. On the surface, it does exactly what the name says: it allows the user to see, quickly and clearly, which concepts in a piece of content triggered subject tagging by the software, and exactly where in the text they occur. It seems simple enough, a handy tool, but upon closer inspection it really opens doors for the user.
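In spirit, inline tagging amounts to recording exactly where in the text each indexing term fired. The following is only an illustrative Python sketch, not Data Harmony’s actual implementation, and the three-term vocabulary is invented:

```python
import re

# Invented mini-vocabulary; a real thesaurus would be far larger.
terms = ["metadata", "taxonomy", "linked data"]

def tag_inline(text, terms):
    """Return (term, start, end) spans for each occurrence of a term in text."""
    spans = []
    for term in terms:
        for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.append((term, match.start(), match.end()))
    return sorted(spans, key=lambda span: span[1])

text = "Good metadata supports linked data; a taxonomy supplies the terms."
for term, start, end in tag_inline(text, terms):
    print(f"{term!r} found at characters {start}-{end}")
```

With the spans in hand, a display layer can highlight each trigger in place or link it out to its thesaurus entry.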

Once the text is tagged, it becomes a question of what the user wants to do with it. That’s where the possibilities start to get really intriguing. In part, it allows an editor to do some very helpful things internally. Once term indexing triggers are tagged in a document, the editor could, for instance, go to the terms’ thesaurus listing, where they can see broader and related terms, along with synonyms or any number of facets of the taxonomy.

Thus, Inline Tagging is a helpful aid in the editing process, but my thoughts are moving more toward the end user right now. It’s they who can truly reap its benefits, because Inline Tagging can easily serve as a conduit for linking data, which has the potential to dramatically enrich a user’s search experience. That is absolutely crucial, especially in publishing.

We’ve already seen how massive the amount of data in the world has become, and we’ve seen the need to understand and control it. We see the emergent patterns in that data, and we work with it to discover new avenues for viewership or revenue or education. But that’s using just a handful of datasets. No matter how large they might be, the size of that data pales in comparison to the data in the world. If we could harness that power, what could we do?

Linked data, which has emerged as one of the most important concepts in data publishing, could well be the answer. In a database that implements Inline Tagging, the key terms and concepts in the documents are located at their occurrences within those documents. By using Inline Tagging, you turn a passage of text into a data item that can be quickly plucked for analysis. But how does that help us?

It can work on a number of levels. This can be as simple as having a taxonomy term link to a definition page, with broader and narrower terms, synonyms, etc. That right there can help with clarity, speed, and accuracy, but that’s just the beginning. There could also be a more substantial relationship between a thesaurus and the world’s data, one that allows users to take those data items and send them out to mine the web for related tags, drawing them back to the original page as related materials.
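That first level is easy to picture. In this hypothetical Python sketch (the thesaurus fragment is invented), a tagged term expands into its broader, narrower, and related terms plus synonyms, each of which could seed a further search:

```python
# A hypothetical thesaurus fragment: each term links to broader,
# narrower, and related terms, plus synonyms.
thesaurus = {
    "cheetah": {
        "broader": ["big cat"],
        "narrower": [],
        "related": ["predator behavior"],
        "synonyms": ["Acinonyx jubatus"],
    },
}

def expand(term):
    """Collect every linked term, for use as additional search queries."""
    entry = thesaurus.get(term, {})
    expansions = []
    for relation in ("broader", "narrower", "related", "synonyms"):
        expansions.extend(entry.get(relation, []))
    return expansions

print(expand("cheetah"))  # → ['big cat', 'predator behavior', 'Acinonyx jubatus']
```

Sending each expansion out as its own query is one simple way a site could pull back related materials from the wider web.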

Say somebody is starting to write a paper on how a cheetah raises its young. They go online to research it and find a paper that addresses the topic perfectly. Now, this website also happens to implement linked data, so when the user queried “cheetahs raising young,” not only did the search result in a strong match on the site, it also, in turn, queried the cloud of data in the web. On its own, it locates information on other sites on the same topic and pulls down additional links: a wiki page, other related articles and papers, videos, or really anything.

It’s well known that people love one-stop shopping. That’s true in retail and that’s true in publishing. If the researcher can get all that information, curated personally for them in a clear, concise, and most importantly, highly accurate manner, they’ll almost certainly make that site their primary resource.

Some of these concepts have already been implemented in places, notably at the BBC, whose unique Sport Ontology, created for the 2012 Olympic Games, revealed just some of the potential of linked data. The idea was to personalize how the viewer watched the Olympics, understanding that enriched, relevant information delivered to the viewer in real time will drive traffic to the site.

There are even bigger ways linked data is being used, or potentially being used. The European Union is funding a project called Digitised Manuscripts to Europeana (DM2E), which aims to link all of Europe’s memory institutions to Europeana, the EU’s largest cultural heritage portal, to give free access to the stores of European history.

What if, in theory, a medical organization had access to linked data during flu season? That organization could pull information from not only medical records, but from, say, community records, school data, and other sources to try to predict when and where outbreaks might occur to minimize the damage. Certainly, there are issues with privacy and other hurdles that would need to be addressed, but even though that example is theoretical, the potential is massive.

Of course, proper implementation of linked data takes plenty of cooperation, so the jury is still out on how much or how soon sophisticated linked data usage could come about. The possibilities for academia, cultural awareness, and even retail look too enticing for it not to flourish. I, for one, am looking forward to a day where information I never dreamed of is right at my fingertips. I don’t know what it’s going to be, but it should be a fun ride.

Daryl Loomis
Access Innovations

Access Innovations, Inc. Releases Data Harmony® Metadata Extractor Version 3.9

September 15, 2014  
Posted in Access Insights, Featured, metadata

Access Innovations, Inc. has announced that the Data Harmony Metadata Extractor is available as an extension of MAIstro™, the flagship thesaurus and indexing application in the company’s Data Harmony software line. Metadata Extractor is a managed Web-based service for revealing the hidden structure in an organization’s content, through superior data mining of publication elements, to normalize and automate document metadata tagging for the benefit of the organization.

Data Harmony Version 3.9 software achieves user-friendly integration of a taxonomy (or thesaurus) with an existing content platform or publishing pipeline. Patented indexing algorithms generate terms that describe what documents are really about, and precise keywords are attached for retrieving those content objects later, under different conditions. Among other benefits, deploying Data Harmony for subject tagging throughout a document collection creates a better search experience for users, because the results they get are closer to the point – there’s less extraneous material.

Leveraging a patented approach to text analysis for better keyword tagging is only one of the advantages to be gained from implementing the new Metadata Extractor Web service.

Quality Metadata Is Essential for Effective Content Management

To enhance the quality of metadata, this Data Harmony extension generates a complete bibliographic citation, creates an auto-summarized abstract of an article’s content, handles author parsing, and assigns subject keywords automatically. Metadata Extractor takes an unstructured or semi-structured article as input and returns an XML document with richer, more descriptive information captured in the metadata elements.

The Metadata Extractor extension identifies descriptive information in a document, distilling and normalizing it in a method far more sophisticated than merely matching keywords in text. The extension attaches this enhanced metadata to boost long-term value of the content object. It’s been shown that high quality metadata, consistently applied, reduces a common source of user frustration: not finding the appropriate document at the right time, in an oversized, disorganized file system.
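To make the idea concrete, here is a rough sketch of the kind of XML record such a service might return. The element names and sample values are invented for illustration; they are not Data Harmony’s actual output schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata extracted from one unstructured article.
record = {
    "title": "Enterprise Taxonomy in Practice",
    "authors": ["J. Doe", "R. Roe"],
    "abstract": "An auto-summarized abstract would appear here.",
    "keywords": ["taxonomy", "metadata"],
}

# Build an XML document with one element per metadata field.
doc = ET.Element("article")
ET.SubElement(doc, "title").text = record["title"]
authors = ET.SubElement(doc, "authors")
for name in record["authors"]:
    ET.SubElement(authors, "author").text = name
ET.SubElement(doc, "abstract").text = record["abstract"]
keywords = ET.SubElement(doc, "keywords")
for kw in record["keywords"]:
    ET.SubElement(keywords, "keyword").text = kw

print(ET.tostring(doc, encoding="unicode"))
```

Structured records like this, attached to every document in a collection, are what make the downstream search and retrieval benefits possible.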

Publishers Stand to Gain From Implementation

“Metadata Extractor is an essential addition to the Data Harmony software lineup for scholarly publishers, especially,” said Marjorie M. K. Hlava, President of Access Innovations, when asked to comment on its release. “Since every publication style sheet requires a targeted approach to leverage the most appropriate fields, Access Innovations provides customization supporting each new implementation. The result is a highly specialized output of accurate, consistent metadata for client documents, with subject keywords applied from their own unique vocabulary.”

M.A.I.™ Sets This Metadata Tool Apart from the Rest

“The extraction process uses element-based semantic algorithms mediated by M.A.I., the Machine Aided Indexer,” said Bob Kasenchak, Access Innovations’ Production Manager. “It draws on a set of Data Harmony programs that harness natural language processing (NLP) for targeted text analysis. During configuration, elements in the document schema are specified for metadata extraction, to reflect the structure of input articles. Then, whenever someone processes an article with Metadata Extractor, M.A.I. algorithms go to work surfacing crucial pieces of information to identify that document, and that document only.”

The graphical user interfaces (GUIs) and input elements for the Metadata Extractor Web service are adjustable based on the nature of incoming data and user needs.

Data Harmony Extension Modules

Access Innovations offers an expanding selection of Web-based service extension modules that are opening up new avenues between content management platforms and the innovative Data Harmony core applications: Thesaurus Master® and M.A.I.™ (Machine Aided Indexer).

To supplement an organization’s publishing pipeline or document collection with great tools for knowledge discovery, the Data Harmony Web service extensions operate on the basis of rigorous taxonomy structures, creative data extraction methods, patented text analytics, and flexible implementation options. All Data Harmony software is designed for excellent cross-platform interoperability, offering convenient opportunities for integration in all kinds of computing environments and content management systems (CMSs).

Visit the Data Harmony Products page to explore the range of focused solutions that are presented by Data Harmony Version 3.9 extension modules.

About Access Innovations, Inc.

Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs. Data Harmony is used by publishers, governments, and corporate clients throughout the world.

E-Books and the Evolution of Publishing

September 8, 2014  
Posted in Access Insights, Featured, reference, Technology

Not that long ago, getting published was the big hurdle for a writer to overcome. You could produce all you wanted, but unless you knew how to get somebody to read your random submission, or you were rich enough to self-publish, your writing lived in a drawer, waiting for you to give it to a friend who doesn’t want to read it.

It’s hard to believe how fast technology has opened publishing up to people. Now, anyone with an opinion has a platform, and while it’s as tough as ever to make a living writing, the platform, in many cases, is totally free. So that changes the hurdle from publication to recognition. If everybody has a voice, how do you get heard?

This isn’t just a question of red-hot opinions on social media. The explosion of e-book publishing has enabled writers of all kinds and all backgrounds, and without a character restriction. Whether it’s through a blog, an e-book, or whatever, the gatekeeper has started to disappear, and to a writer who likes getting published, that prospect is thrilling.

But a new gatekeeper has replaced the old. The driving force of the explosion has been the Amazon Kindle. Since the device was first released in 2007, Kindle titles have taken an increasingly large share of the industry, and now make up nearly 20% of all book sales, not just e-book sales.

That’s astonishingly fast, and the publishing industry has been dragged along kicking and screaming. It’s easy to see why the transition is painful for publishers. With no physical copy to print and no role in distribution, they naturally make less per book sold than they did in the past. Amazon made deals advantageous to itself, of course, but sales have continued to increase. The downside is the issues that have arisen from Amazon trying to strong-arm publishers who don’t want to play ball.

By the same token, writers make less in royalties than they once did, as well. That’s the sad part, I guess, but the positive side is that more people are writing and more ideas are floating around, which is a beautiful thing and vital to the advancement of culture. It also presents a brand new problem for the industry: information overload.

As long as there was traditional publishing, there was a structure in place to determine what writing was deemed “worthy” of printing. It kept dangerous or controversial views out of the public eye, sure, but it also filtered out the garbage. Academic publishing still has its review system in place to make sure a work is suitable to print, but the non-academic side now has little to no filter.

Let’s face it: for all the good that open access to publication can do for society, it also means wading through a lot of low-quality material to find what is relevant. So the question becomes how to access content so that every search doesn’t require filtering through a mass of irrelevant and useless material. It’s for this reason that data management has become so vital. Its use has produced revolutionary new ways to look at publishing.

The basic fact of having an individual platform is big enough. But there are larger, more groundbreaking efforts to take advantage of the opportunities the technology has afforded us. Norway, for instance, is in the process of digitizing all of its books, all of them, to make them available online to anyone with a Norwegian IP address; the Digital Public Library of America is a growing resource connecting libraries across the country; and the Public Library of Science has turned the paradigm of academic publishing on its ear.

The concept of the digital library isn’t new. Project Gutenberg has been around since 1971. Little did we know back then what kind of value that might have. It’s only becoming clear now that analytic software has become so advanced. For Amazon, books were a means to mine customer data for other products. Now, that kind of data mining is commonplace. It doesn’t have to be about sales, though. In these library projects, that same level of data mining can be used for all sorts of purposes, from recommending new reading materials to a better understanding of a student’s learning habits.

The potential in these projects is limitless, and it takes innovative thinkers to look for patterns and derive ways to utilize them. But the most important thing to me is that what I write, what anybody writes, can be published and accessed for all to see in one form or another if somebody is interested. After all, if I want to read about new methods in cancer treatment or some crazy person ranting about aliens, I should have that right, and so should everyone.

Daryl Loomis
Access Innovations

Taxonomy in the Pipeline, Part 3: Positive Feedback Loops

September 1, 2014  
Posted in Access Insights, Featured, Taxonomy

In her 1996 paper, “The Rage to Master: The Decisive Role of Talent in the Visual Arts,” Ellen Winner presents a concept she calls, well, the “rage to master.” The idea is that intellectually gifted children have a natural inclination to focus on a subject and immerse themselves in it until they reach mastery.

With proper support, the “rage to master” creates a positive feedback loop. The child’s interest combines with his or her gifts, making a topic easier to grasp than it would be for a more average individual. That ease brings a feeling of satisfaction, reinforcement that encourages the child to continue mining the subject. Using the initial knowledge as a springboard, the cycle repeats itself, creating an outward-spreading spiral of knowledge.

Data Harmony has something in common with that gifted child: the feedback loop in its indexing. The software knows nothing at first, but when it is fed content, its subject of choice, and is given support and encouragement in the form of taxonomy building and editorial analysis, it can start the learning process.

With one piece of content, it can only learn so much. It grows with each new piece, the next feeding off what came before, but it needs consistent and diligent editing of those results. Given that, the software can become progressively smarter.
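As a toy illustration of that loop (a deliberately simplified sketch, not how M.A.I. actually works), imagine a rule base that maps text patterns to taxonomy terms and grows only through editorial corrections:

```python
# Toy sketch of the editorial feedback loop: the indexer starts knowing
# nothing, and editor-approved corrections become rules for the next batch.
class FeedbackIndexer:
    def __init__(self):
        self.rules = {}  # lowercase text pattern -> taxonomy term

    def index(self, text: str) -> set:
        """Suggest taxonomy terms whose patterns appear in the text."""
        lowered = text.lower()
        return {term for pattern, term in self.rules.items() if pattern in lowered}

    def apply_editorial_feedback(self, corrections: dict):
        """Fold editor-approved pattern -> term pairs into the rule base."""
        self.rules.update(corrections)

indexer = FeedbackIndexer()
# First pass: the software "knows nothing," so no terms are suggested.
print(indexer.index("A study of controlled vocabularies"))  # set()
# Editors review the batch and contribute a rule.
indexer.apply_editorial_feedback({"controlled vocabular": "Controlled vocabularies"})
print(indexer.index("A study of controlled vocabularies"))  # {'Controlled vocabularies'}
```

Each editing pass enlarges the rule base, so the next batch is indexed a little better: the software-side analogue of the spiral described above.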

Just like the gifted child, though, who can never learn everything about a given subject, the feedback loop that indexing software creates won’t last forever. Eventually, progress will slow down. There’s a big difference between the highly accurate search results the software delivers and perfectly accurate results, an unattainable goal.

Voltaire’s aphorism, “Perfection is the enemy of the good,” applies well here. The “rage to master” in the gifted child depends on progress and satisfaction. Attempting perfection undermines both. Progress will slow to a halt, denying the child the satisfaction that was the driving force in the first place.

Of course, we’re talking about software here, so feelings don’t actually apply. They do apply to the user, though, who “motivates” the software by feeding it content. Users are the impetus for the software’s education, supplying new material while honing and fine-tuning the output. All of this delivers accurate results, and the user gets the feeling of satisfaction.

Indexing software has the “rage to master” content because it was built to serve that purpose. It can’t do anything alone, though. It takes a dedicated team of editors to feed it that content and interpret the results. The responsibility is on them to understand how to leverage the results into valuable commodities. Without that side of it, the software achieves very little.

The emergence of Big Data has made this increasingly vital to business in industries of any stripe. The amount of data is growing at an astonishing rate and shows no signs of slowing down. If it was difficult to collect and analyze large amounts of content manually a few decades ago, imagine the struggle today with the glut of tablets, phones, and computers collecting and transmitting data every moment of the day.

There is so much out there that even a large team of editors can struggle to sort and analyze it with much effectiveness or insight. But this is exactly where the feedback loop created by indexing software can change the game. The software speeds the process, facilitating the analysis, but it can’t make decisions on its own. The editors are absolutely crucial to the accuracy of the software’s output. It starts with an analysis of a single batch of content, but with their guidance, that analysis builds on itself with each new batch. Before long, patterns start to emerge.

Now, the people who would have had to endure the tedium of slowly going through the data by hand can work with these emergent patterns instead. This is a far more meaningful way to interact with data and enables new ways to look at the results. Now, people can more quickly and easily identify and react to trends in their industry.

In publishing, this means understanding how users search for content and potentially directing them to content they may not initially have found valuable. Using Data Harmony, the publisher has a controlled vocabulary that narrowly and accurately directs searches, but it also allows them to observe and analyze how users search and what else they search for, which gives them tools to find patterns in their customer base and tailor future initiatives to their specific needs.

The mountain of data in this world is only going to continue to grow, so while large-scale analysis is important today, it will be even more important tomorrow, next week, and in a year. Who knows what the landscape will look like in a decade, but we can safely speculate that the positive feedback loop that emerges from software like Data Harmony will enable organizations to handle it, no matter how massive it may have grown.

Daryl Loomis
Access Innovations

Data Harmony® v.3.9 Named 2014 Trend-Setting Product by KMWorld

August 25, 2014  
Posted in Access Insights, Featured

Access Innovations, Inc., the industry leader in data organization and innovator of the Data Harmony® software suite, is pleased to announce that KMWorld has selected Data Harmony 3.9 for their Trend-Setting Products list for 2014.

“We enhance and enlarge the Data Harmony offerings every year. This year the suite has increased to 14 modules. It is vitally important to stay at the forefront of knowledge management. With Data Harmony v.3.9, we have delivered the most integrated, flexible, streamlined, and user-friendly semantic enrichment software on the market,” notes Marjorie Hlava, president of Access Innovations, Inc. “We will continue developing new and innovative ways to analyze, enhance, and access data to increase findability and distribution options for our customers.”

The proven, patented Data Harmony software is the knowledge management solution for indexing information resources. In 2014, it pushed farther into the future with the inclusion of Inline Tagging, which automatically finds and labels text strings, and Smart Submit, a module that greatly streamlines the author submission process. With these in place, Data Harmony offers a richer, more advanced, and friendlier customer experience.

The Trend-Setting Product awards from KMWorld began in 2003. More than 650 offerings from vendors were assessed by KMWorld’s judging panel, which consists of editorial colleagues, analysts, system integrators, vendors themselves, line-of-business managers, and users. All products selected demonstrate clearly identifiable technology breakthroughs that serve vendors’ full spectrum of constituencies, especially their customers.

“Data Harmony was selected by the panel because it demonstrates thoughtful, well-reasoned innovation and execution for the most important constituency of them all: the customers,” explained Hugh McKellar, editor-in-chief of KMWorld Magazine.

Data Harmony v.3.9 is available through the cloud, as a hosted SaaS version, or as an enterprise version hosted on a client’s server. More information about Data Harmony and the 14 software modules is available on the Data Harmony Products page.


About Access Innovations, Inc.

Access Innovations has extensive experience with Internet technology applications, master data management, content-aware database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs. Data Harmony is used by publishers, governments, and corporate clients throughout the world. Access Innovations: changing search to found since 1978.

About KMWorld

KMWorld is the leading information provider serving the knowledge management systems market and covers the latest in Content, Document, and Knowledge Management, informing more than 40,000 subscribers about the components and processes – and subsequent success stories – that together offer solutions for improving business performance. KMWorld is a publishing unit of Information Today, Inc.

Taxonomy in the Pipeline, Part 2: The Need for Quick Publishing

August 18, 2014  
Posted in Access Insights, Featured, Taxonomy

Regardless of discipline, there’s one thing that connects most academics I’ve encountered: the desire to keep practicing their respective fields. They’ve spent years cultivating their expertise and want to make a difference in their field. But in order for that to happen, they all share the same obstacle: tenure.

Increasingly, universities are favoring adjunct jobs over tenured professorships. When one looks at it from a business perspective, as administrations with budgets are bound to, it isn’t hard to see why. Universities get to pay less for the same work (though maybe not the same quality of work), and they retain power over the adjunct’s job security.

Whether one agrees with that policy, it makes a certain kind of sense from that side of the pipeline. The system isn’t exactly ideal for the academics in adjunct positions, though, whose lack of job security means that, year after year, the potential of finding a new job (another adjunct position, likely) weighs heavily on their minds. Nobody can do great work under that kind of pressure.

Finally, they do get that tenure-track position. Initially, it might seem like the hard part is over, but it’s only just begun. Convincing a university administration to offer a position is one thing; it’s a whole different story when it comes to the thing that most fuels a university’s engine: publication.

It’s tempting to think high-mindedly about higher education, but at the administrative level, a professor’s value rests far more on academic prestige and contributions to the field than on skill as an instructor. These contributions are measured by the quantity of articles published in academic journals and by the prestige of those journals. There really is no other road to tenure.

It’s a cutthroat game and professors are playing for keeps…they have to. There aren’t more total jobs on the academic market; tenured positions are replaced by adjunct ones at the first available opportunity. Those with tenure hold onto the privilege for dear life, and rarely does a seat open up at the table.

Once they do find a seat, then, it’s a simple equation why the institution demands publication. The institution wants prestige, which it gets by having a faculty that publishes in renowned journals. It selects for publication because it’s a bottom-line situation: a well-respected faculty means a higher class of student, which means a higher rate of tuition and a better result at the end of the fiscal year.

This is why it’s vital for the journal itself to make sure that what is printed on their pages meets their academic standards. Enter the peer review process. While it was designed to uphold academic rigor (and often succeeds at that purpose), it has the consequence of acting as a gatekeeper for those seeking tenure. That consequence may not have been intentional, but it has become a growing issue.

Submitted articles, certainly, must be vetted for accuracy and content, but they also must be filtered so they get into the right hands for peer review. This process takes time—always has—but it has grown even slower in recent years. With fewer tenured positions, there are fewer people available to review articles. The number of articles hasn’t necessarily changed, though, so those available are now busier than ever and, unfortunately, less attentive on top of it.

The trouble is that those on the tenure track only have so much time before their window closes. It can take months and even years for an article to slog through the pipeline, often preventing viable candidates from receiving tenure for the simple and fixable issue of delay. This inefficiency does a grave disservice to the very people the system was designed to help.

Without wholesale change in university administration mentality, the issue will not fix itself, so it must be addressed from a new angle. Identifying and analyzing the metadata present in a given article submission makes clear where the submission comes from and whom it should go to. That clarity streamlines the process, eases the burden on both the author and the peer reviewer, and subsequently speeds up the publishing process.

Data Harmony software is able to take care of this quickly and easily with the Smart Submit module. Using the article metadata in conjunction with a taxonomy, Smart Submit automatically identifies the subject areas covered in a submitted article. With that information, and with a properly designed management system, a publisher can find qualified peer reviewers for the submission and ensure that reviewers don’t get overwhelmed with submissions. A lighter workload means that more time and care can be taken with a given submission, making for a better work environment and, potentially, a smoother path through the pipeline.
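In the spirit of that workflow, here is a hedged sketch of taxonomy-based reviewer matching. The function, reviewer records, and matching rule (prefer the greatest term overlap, then the lightest current workload) are hypothetical illustrations, not the actual Smart Submit API:

```python
# Hypothetical sketch: match a submission's taxonomy terms against each
# reviewer's expertise terms, preferring more overlap and a lighter load.
def assign_reviewer(article_terms, reviewers):
    """reviewers: list of dicts with 'name', 'expertise' (set), 'load' (int)."""
    best, best_key = None, None
    for r in reviewers:
        overlap = len(set(article_terms) & r["expertise"])
        if overlap == 0:
            continue  # no subject match at all
        key = (overlap, -r["load"])  # more overlap first, then lighter load
        if best_key is None or key > best_key:
            best, best_key = r, key
    return best["name"] if best else None

reviewers = [
    {"name": "Dr. Patel", "expertise": {"Oncology", "Clinical trials"}, "load": 4},
    {"name": "Dr. Chen",  "expertise": {"Oncology", "Immunology"},      "load": 1},
]
print(assign_reviewer({"Oncology", "Immunology"}, reviewers))  # Dr. Chen
```

Balancing on current load is what keeps any one reviewer from being overwhelmed, which is the workload benefit the paragraph above describes.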

Academic publishing is a two-way street. Publishers need authors to write articles to populate their journals. Authors need journals to publish their research, which furthers their career and their field. When the two sides work together, that’s when a field of study can really flourish.

Why set up these barriers? It should be difficult to get published in a prestigious journal because academic rigor demands it, not because of an inefficient system that helps neither side. Smart Submit won’t solve every problem an author might face in getting published, but streamlining the submission and review steps and making them more transparent will make the process less frustrating and, ideally, speed up an arduous process that too often hinders fresh voices when it should be an avenue for them to be heard.

Daryl Loomis
Access Innovations
