Access Innovations, Inc. Now Accepting Presentation Abstracts for the Eleventh Annual Data Harmony Users Group Meeting

July 7, 2014  
Posted in Access Insights, Featured, Taxonomy

Access Innovations, Inc. is pleased to announce the Call for Presentations for the 2015 Data Harmony Users Group (DHUG) meeting. The annual DHUG meeting is held every February at the Access Innovations company headquarters in Albuquerque, New Mexico. DHUG 2015 is the eleventh annual meeting and will focus on leveraging taxonomies and tagged data, techniques for integrating tagged data flows into production cycles, and inventive ways to improve the user experience.

The theme for the meeting, “Beyond Subject Metadata, or, So you have a Taxonomy!… now what?” urges Data Harmony users to ask questions such as the following:

  • What do I do now that my content is tagged?
  • How do I integrate that tagged content into my workflow or production cycle?
  • How can I get my newly-tagged content in front of my users?
  • How can I improve the search experience for my users who want to access these information assets?
  • Are there other features I can add based on the metadata tagging now in place?
  • What other implementations can I set up to capitalize on content objects organized around my taxonomy?

For the first time, Data Harmony users can now submit presentation proposals using the company’s Smart Submit software extension module, at http://www.dataharmony.com/dhug/submissions. The system is a full working implementation of the module and demonstrates how easy it is to use. The deadline for inclusion in the preliminary program is September 20, 2014.

In the DHUG 2015 implementation of Smart Submit, the first screen includes fields for entering such information as title, creator (author or presenter, usually a DHUG member), abstract, contact information, and a brief biography of the presenter. Optionally, the user may upload a PDF or Microsoft Word file. There are also some fields customized for the meeting organizer, such as the day of the week on which the presenter would prefer to be scheduled and how long the presentation will be.

In the second screen, Smart Submit uses Data Harmony’s M.A.I.™ (Machine Aided Indexer) software module to display suggested indexing terms from the Access Innovations thesaurus to characterize the presentation. M.A.I. bases its automated indexing assistance on the text in the title, the abstract, and any PDF or Microsoft Word document that was uploaded via the first screen. The presenter chooses to retain or remove each of the suggested terms and may add additional terms from the thesaurus. The system also allows for searching the thesaurus and adding terms from the search results view.
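To make the flow concrete, here is a minimal sketch of what a client interaction with a term-suggestion service might look like. It is illustrative only: the endpoint URL, field names, and response format are assumptions, not the actual Smart Submit or M.A.I. interface.

    import requests

    # Hypothetical endpoint -- the real Smart Submit/M.A.I. interface is not shown here.
    SUGGEST_URL = "https://example.org/mai/suggest"

    def suggest_terms(title, abstract, fulltext=""):
        """Send the submission text to a term-suggestion service and return its terms."""
        response = requests.post(
            SUGGEST_URL,
            json={"text": " ".join([title, abstract, fulltext])},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("suggested_terms", [])

    def review_terms(suggested, retained):
        """Keep only the suggested terms the presenter chose to retain."""
        return [term for term in suggested if term in retained]

In the module itself, the retained and added terms then travel with the submission record into the meeting-planning workflow.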

“This is an exciting addition to the DHUG meeting planning process,” remarked Heather Kotula, Marketing Coordinator for Access Innovations. “We made it a priority to showcase our own software this year. Using Smart Submit to collect presentation proposals is going to make my job of organizing the meeting easier, faster, more complete, and more accurate.”

DHUG registration includes breakfast, lunch, and breaks with refreshments for all five days of the meeting, February 16th-20th, 2015. A networking reception will be held Monday evening at the University/Midtown Hampton Inn. On Tuesday evening, dinner will be provided for all attendees at a unique Albuquerque attraction. The University/Midtown Hampton Inn is the primary DHUG meeting hotel, offering a $79 nightly rate for members.

For more information about DHUG 2015, please visit http://www.dataharmony.com/dhug/dhug2015.

Inline Tagging – What’s to Know?

June 30, 2014  
Posted in Access Insights, Featured, metadata, Taxonomy

Data Harmony recently released their Inline Tagging Web service extension – let’s talk about inline tagging software and the information environments best suited to benefit from it.

Web developers are implementing inline tagging software in an increasing variety of information environments, spurred on by the creativity of users requesting new features based on accurate placement of inline tags. And it’s probably safe to say many users aren’t aware it’s inline tagging that propels some of the innovations they enjoy in their graphical user interface (GUI)… at the level of the onscreen text.

Data Harmony recently released their Inline Tagging Web service as one of the Version 3.9 ‘extension modules’ – causing me to wonder:

  • What kinds of Web computing environments are well-suited for leveraging subject tags at the level of inline text?
  • What is inline tagging good for? What can a subject tag accomplish when it’s been matched to a specific word’s location in the input text?
  • What is the Data Harmony development team’s vision for implementation of the Inline Tagging extension?
  • Can tags other than subject indexing terms be deployed for inline tagging?

To begin at the end of the tale, the answer to the last question is ‘Yes’ – geographical terms and other non-subject tags can be deployed for inline tagging, since inline tags are based on accurate indexing, which in turn is reliant on controlled vocabularies.

Controlled vocabularies such as taxonomies and thesauri can store terms like place names and other kinds of terms that don’t capture strictly conceptual information. Rather, they serve as an authority file for other forms of information, for example, geographical. Inline tagging applications can also match these non-conceptual terms during analysis of input text, and can be configured to extend functionality for a purpose like linking to a geographical database for supporting information. For example, if ‘Canada’ were matched in the text, inline tagging might activate a mouseover window that offers the user a chance to look at a relevant entry from an atlas or encyclopedia. If the user chooses to click on the word ‘Canada’ in the text, a new interface tab opens to the relevant entry.
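The mechanics are easy to picture. Below is a minimal sketch, assuming a toy vocabulary and plain HTML-style anchors; the actual Inline Tagging Web service, its markup, and its configuration are not shown here.

    import re

    # Toy authority file: vocabulary terms mapped to reference links (illustrative only).
    GEO_VOCAB = {
        "Canada": "https://en.wikipedia.org/wiki/Canada",
    }

    def tag_inline(text, vocab=GEO_VOCAB):
        """Wrap each vocabulary match in an inline tag that carries a reference link."""
        for term, url in vocab.items():
            pattern = re.compile(rf"\b{re.escape(term)}\b")
            text = pattern.sub(f'<a class="inline-tag" href="{url}">{term}</a>', text)
        return text

    print(tag_inline("Maple syrup exports from Canada rose last year."))
    # The word 'Canada' now carries a link that a GUI could open in a mouseover window or new tab.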

Guess what I discovered on taking my questions to the Data Harmony 3.9 developers… implementation ideas!

As a tool for search engines to boost the results of document search and retrieval

When a tag is included inline in a text object found by a search engine, words immediately around the tag (or the entire sentence) can be returned to the search engine, to supplement search results by providing context information about the match’s location in the found document.

The capability to return search term matches along with their context is significant in publications with multiple sections or chapters, to permit easier division into identifiable sections and subsections. Many publishers now offer content for sale in smaller pieces, so each customer can put together a ‘customized electronic book’ by combining chunks from different sources. Search and retrieval in publication collections retrieves relevant sections and subsections for recombination into new content objects. Accurate inline tagging facilitates this highly effective search strategy.
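A sketch of the idea, independent of any particular search engine or of the Data Harmony implementation: once a tag’s position in the text is known, the surrounding sentence can be returned as a snippet alongside the hit.

    import re

    def context_snippet(text, term):
        """Return the sentence containing the matched term, for use as a search snippet."""
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if term.lower() in sentence.lower():
                return sentence
        return None

    doc = ("The committee met in March. Funding for agricultural education "
           "was extended by two years. Other items were tabled.")
    print(context_snippet(doc, "agricultural education"))
    # -> 'Funding for agricultural education was extended by two years.'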

To turn up the volume of social media postings

Inline tagging can add value to search and retrieval within social media communities, increasing the return on metadata that’s already there in posts! You can use it for better categorization and linking of related Twitter ‘tweets,’ professional discussions, social issue blogs, and closed community forums (chat rooms) – for turning up the volume!

A well-placed inline tag inside a blog entry offers a semantic hook for Web applications to latch onto: blog postings can be followed within a certain date range only, or sent to designated recipients automatically when contributors write about any subject of definite interest.

As a lexicological training tool

Inline tagging methods can provide information for a language learner or human indexer about the meaning, form, and usage of words, while keeping the context in view.

In XML databases

XML databases often build indexes of searchable data by polling, at incredible speeds, all text in all available XML files – even for millions of records – and storing results in a repository. Inline tagging offers an alternative to the traditional polling method that often serves as the foundation for document search and retrieval in an XML database. Inline tagging methods enable you to describe fields with unique inline XML tags, for later recognition and retrieval by the spidering engines.

Kirk Sanders, Editorial Services
Project Manager, Access Innovations

Rule Base Solutions

People often ask us how much time it will take to manage a rule base with Data Harmony software. We reply with specific customer experience numbers: a few hours per month of editorial time to maintain both the thesaurus and the rule base. One customer of ours, the American Institute of Physics, found that maintaining their thesaurus and rule base takes less than 15 hours per month at a throughput of 2,000 articles per week. Another customer, The Weather Channel, manages breaking news all day long with four hours per month of maintenance. It takes the editorial team just a few hours per month to keep up with the changing trends and events within their field and transfer those into the organizational knowledge base represented by the M.A.I.™ rule base.

This small investment provides the organization with the highest level of accuracy in coding (usually well over 90% hits without human intervention), and it also supports analysis of business trends, the creation of author profiles, semantic fingerprints of the entire organizational holdings, and the extraction of real meaning from all the data. Other customers, such as IEEE and the US GAO, find the accuracy of their Data Harmony software implementations so high that they now only sample the data periodically to glean new terms and trends. They do not see the need to review every single item.

The real question, though, should be a matter of control. If a rule-based solution maintained by the editorial staff is the approach taken, then full control remains with the editorial department. If a programmatic learning system – the seductive call of the purely automatic system – is the choice, then oversight either remains with the vendor or moves to the IT (information technology) department. The lower accuracy of the indexing returns (usually in the 60% range) means much more time spent by the editorial department on the production of the taxonomy tagged items. The time that would have been spent improving the knowledge base is instead spent in production time processing records, due to lower accuracy levels.

Here’s an example: let’s assume 1,000 articles per month. Using 90% accuracy versus 60% accuracy, how much extra production time is involved? Let’s also suppose, for easy calculations, that there are 10 terms per article. If our rule base indexing is 90% accurate, then only one term per article will need to be reviewed, researched, and replaced or discarded. If alternative indexing methods produce 60% accuracy, then there are four terms per record to research, replace, or discard. The time to research a term and decide on its disposition is conservatively two minutes. So two minutes per term, at one term per article across 1,000 articles, is 33.3 hours per month. But if four terms per article (60% accuracy) need reviewing, then 133.3 editorial hours per month are needed – obviously, four times the effort. Moreover, the rule base improves over time with this small editorial input, so the maintenance time continues to decrease.
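The same arithmetic in a few lines, using the figures from the example above (1,000 articles, 10 terms per article, two minutes per term needing review):

    ARTICLES_PER_MONTH = 1000
    TERMS_PER_ARTICLE = 10
    MINUTES_PER_TERM = 2          # time to research a term and decide on its disposition

    def review_hours(accuracy):
        """Editorial hours per month spent fixing terms the system got wrong."""
        wrong_terms_per_article = TERMS_PER_ARTICLE * (1 - accuracy)
        return ARTICLES_PER_MONTH * wrong_terms_per_article * MINUTES_PER_TERM / 60

    print(round(review_hours(0.9), 1))   # 33.3 hours per month at 90% accuracy
    print(round(review_hours(0.6), 1))   # 133.3 hours per month at 60% accuracy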

A statistical approach can appear to be a gift on a silver platter, but beware – such an approach means more time spent on production, less on building a knowledge base, lower accuracy, higher throughput costs, and no chance to learn about the data through semantic fingerprinting. To make matters even more frustrating, you have little control of the system. It has to be improved and worked on by the vendor or the IT department. New terms require a full revamping of the system each time, resulting in costly delays, rather than the real-time, instant updates that a system based on Java object-oriented programming allows. As a result, the taxonomy is not responsive to the organization’s data.

It is tempting to think that the classification of content can be done without the use of a vetted taxonomy properly applied or that the taxonomy only provides a convenient file folder naming convention. Unfortunately, the cost is high to make that choice. The accuracy is lower, the throughput is slower, and the clerical aspect of the indexing process is increased when you use a statistical system. In addition, control is no longer with the editorial department, but shifted to IT and the vendor. The power dynamic of the choice is clear: IT versus editorial. Who do you want to be in control of your indexing?

Marjorie M.K. Hlava
President, Access Innovations

Data Harmony Version 3.9 Includes MAI Batch GUI – A New Interface For M.A.I.™ (Machine Aided Indexer) and MAIstro™ Modules

June 16, 2014  
Posted in Access Insights, Featured, metadata, semantic

Access Innovations, Inc. has announced the inclusion of the MAI Batch Graphical User Interface (GUI) as part of the recent Data Harmony Version 3.9 software update release. MAI Batch GUI is a new interface for running a full directory of files through the M.A.I. Concept Extractor. This tool enables processing of large amounts of text through the Data Harmony M.A.I. Concept Extractor with a single command. Usually used in working with legacy or archival files, it allows complete semantic enrichment of entire back files in a short time. Once the batch is run, the terms from the thesaurus or taxonomy become part of each record.

“For Data Harmony Version 3.9, we decided to add the interface to the MAIstro and M.A.I. modules to allow use directly from the desktop, giving more power to the user,” remarked Marjorie M. K. Hlava, President of Access Innovations, Inc. “It’s a fast, easy way to perform machine-aided indexing on batches of documents, without any need for command-line instructions.”

“M.A.I.’s batch-indexing capability has been in place for years via command line interface,” noted Bob Kasenchak, Production Manager at Access Innovations. “This new GUI makes it really easy to use. Customers only need to open ‘MAI Batch app’ in their Data Harmony Administrative Module, choose the files or directories to process, and submit the job.”

The purpose of MAI Batch is to provide immediate processing of data files on demand. MAI Batch can be deployed to achieve rapid subject indexing of legacy text collections.

MAI Batch GUI offers semantic enrichment by extracting concepts from input text in most file formats, including the following:

  • Adobe PDFs
  • MS Word DOC files
  • HTM/HTML pages
  • RTF documents
  • XML files

For XML files, the ‘XML Tags’ option permits users to define specific XML elements for MAI Batch GUI to analyze during batch processing. This option opens the door for indexing source documents that are tagged according to different XML schemas. XML Tags also permits the user to exclude designated sections of the document structure from indexing.
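The include/exclude idea can be sketched in a few lines. The element names and the configuration style below are made up for illustration; the real ‘XML Tags’ option is configured in the MAI Batch GUI itself.

    import xml.etree.ElementTree as ET

    INCLUDE = {"title", "abstract", "body"}          # elements whose text is indexed
    EXCLUDE = {"references", "acknowledgments"}      # sections skipped entirely

    def extract_text(element, chunks=None):
        """Recursively gather text from included elements, skipping excluded subtrees."""
        if chunks is None:
            chunks = []
        if element.tag in EXCLUDE:
            return chunks                            # skip this section and its children
        if element.tag in INCLUDE and element.text:
            chunks.append(element.text.strip())
        for child in element:
            extract_text(child, chunks)
        return chunks

    doc = """<article>
      <title>Soil moisture sensing</title>
      <abstract>We describe a low-cost sensor network.</abstract>
      <references><citation>Smith 2010</citation></references>
    </article>"""
    print(" ".join(extract_text(ET.fromstring(doc))))
    # -> 'Soil moisture sensing We describe a low-cost sensor network.'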

The interface’s Input and Output panes present a practical view of the batch during processing, enabling a degree of interactivity – M.A.I. is a very accessible automatic indexing system. It’s a ‘machine-aided’ software approach, even when applied to batches of documents. IT support is helpful but not required to process documents with, or maintain, the Data Harmony suite of products.

When the documents already contain indexing terms, MAI Batch GUI derives accuracy statistics for the batch and logs them in the output. M.A.I. calculates the indexing accuracy of the terms suggested by Concept Extractor compared with the previously applied subject terms. This powerful method for enhancing the accuracy of subject indexing is based on reports generated by the M.A.I. Statistics Collector, giving a taxonomy administrator all the data needed to continually improve the results based on the system’s recommendations, selections, and additions.
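As a rough illustration of what such a comparison involves (not the M.A.I. Statistics Collector’s actual formulas), the batch report boils down to comparing two term sets per document:

    def indexing_accuracy(suggested, existing):
        """Compare machine-suggested terms with previously applied subject terms."""
        suggested, existing = set(suggested), set(existing)
        hits = suggested & existing
        recall = len(hits) / len(existing) if existing else 0.0       # share of existing terms found
        precision = len(hits) / len(suggested) if suggested else 0.0  # share of suggestions that matched
        return recall, precision

    recall, precision = indexing_accuracy(
        ["Thesauri", "Indexing", "Metadata"],
        ["Thesauri", "Metadata", "Controlled vocabularies"],
    )
    print(f"found {recall:.0%} of existing terms; {precision:.0%} of suggestions matched")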

About Access Innovations, Inc. – www.accessinn.com, www.dataharmony.com, www.taxodiary.com

Founded in 1978, Access Innovations has leveraged semantic enrichment of text for internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs.  Data Harmony is used by publishers, governments, and corporate clients throughout the world.

Blind Alleys, Dead Ends, and Mazes

June 9, 2014  
Posted in Access Insights, Featured, Taxonomy

“I don’t know where I am!”

Time traveler Clara Oswald becomes disoriented once again, in a scary encounter with a taxonomy displayed in flat format.

Taxonomies can be displayed in a variety of ways. One of the display types that we occasionally see is known as the flat format display. It’s described in the main U.S. standard for controlled vocabularies, ANSI/NISO Z39.19 (Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies, published by the National Information Standards Organization) as follows:

“The flat format is the most commonly used controlled vocabulary display format. It consists of all the terms arranged in alphabetical order, including their term details, and one level of BT/NT hierarchy.”

At the top level, this format might (or might not) look like that of other hierarchical vocabularies when they are collapsed. What happens, though, when you start navigating to deeper levels? Let’s take a look at the ERIC Thesaurus published by the U.S. Department of Education’s Institute of Education Sciences. Here’s the initial view, when you choose to browse the thesaurus:

[Screenshot: the ERIC Thesaurus browse view, showing its top-level category labels]

Aha, top terms, yes? Unfortunately, no. These are non-hierarchy category labels into which the actual terms are grouped, without regard for hierarchical placement. Clicking on any of these category labels results in a flat alphabetical display of all the terms in that category. This is something that thesaurus publishers can get away with when they use a flat format display.

If you click on the first category, Agriculture and Natural Resources, you see a flat alphabetical list of terms, including Agricultural Education. Clicking on that, you would discover that its one broader term is Education (no, not Agriculture and Natural Resources), and that its one narrower term is Young Farmer Education. What you see is basically a term record, and that’s all. That’s flat format display.

[Screenshot: the flat format term record for Agricultural Education]

Are there problems with this? I think so. Even if the vocabulary is viewed only by the people constructing and maintaining it, those people will have difficulty spotting gaps and redundancies. And even if the vocabulary is used only by in-house human indexers, they will have difficulty exploring it to find the most appropriate terms to apply for indexing, and they will tend to use the first terms they come across that seem to fit. In the latter scenario, the ignored terms are apt to fall victim to usage statistics, even if they’re good terms that should have been used. (I’ve seen this happen to at least one taxonomy.)

While the format may have simplified things in the days of printed taxonomies, taxonomists and indexers have problems with this format. Think of the problems encountered by searchers looking for information resources. Searchers benefit from being able to navigate and explore a taxonomy, and to take full advantage of its hierarchical structure. The flat format doesn’t present a hierarchy; instead, it presents obstacles.

Blind Alleys

While you’re traveling down one path, you don’t have an opportunity to see what’s in nearby pathways, or in distant but related pathways.

Dead Ends

You can’t see where you’re headed, or how far the path goes. The path that you originally saw as promising might only lead to a stone wall, after you’ve already traveled one term at a time to get there. (Some flat format taxonomies, though, turn out to be unexpectedly shallow, so you’re more apt to hit a dead end sooner than later.)

Mazes

Because you can’t see more than one level before and after the term you’re in, and you can’t see over the hedge to other pathways, you may end up zigzagging and backtracking through the taxonomy in a frustrating guessing game.

Getting a Better View

Ideally, you should be able to view the full panorama of a taxonomy’s coverage. At the same time, you should be able to focus on the areas of interest to you. And you should be able to view more than one branch at the same time, and to view entire branches. To accomplish those goals, you need a full hierarchical display that you can expand and collapse as needed. The example below is a screenshot of the MediaSleuth thesaurus, some branches of which I’ve temporarily exposed to an expanded view with a click of the mouse.

[Screenshot: the MediaSleuth thesaurus, with several branches expanded]

With this kind of view, we can see our way in all directions, from wherever we are. We can see where we might want to go from there, and how to get there. We know exactly where we are.

Barbara Gilles, Taxonomist
Access Innovations

Access Innovations, Inc. Announces Release of the Semantic Fingerprinting Web Service Extension for Data Harmony Version 3.9

June 2, 2014  
Posted in Access Insights, Featured, semantic

Access Innovations, Inc. announces the Semantic Fingerprinting Web service extension as part of their Data Harmony Version 3.9 release. Semantic Fingerprinting is a managed Web service offered to scholarly publishers to disambiguate author names and affiliations by leveraging semantic metadata within an existing publishing pipeline.

The Semantic Fingerprinting Web service data mines a publisher’s document collection to build a database of named authors and affiliated institutions, and then expands the database over time with customization and administration services provided by Access Innovations during configuration. The author/affiliation database powers M.A.I.™ (Machine Aided Indexer) algorithms for matching names in new content received from contributors. During the configuration phase, an essential component is the graphical user interface (GUI) where users disambiguate unmatched names using clues that M.A.I. surfaces as a result of rigorous document analysis.

“Like a fingerprint, each author has a unique ‘semantic profile’ that captures the specific disciplines and topic areas in which they publish – reflecting subject areas covered in their body of research. Data Harmony generates subject keywords that describe the document’s content, to increase the number of author name matches a reviewer can find during editorial review of unresolved names,” explained Kirk Sanders, Access Innovations Taxonomist and Data Harmony Technical Editor.

“Semantic Fingerprinting is a versatile addition to the Data Harmony software lineup,” said Marjorie M. K. Hlava, President of Access Innovations, Inc. “Publishers can incorporate Semantic Fingerprinting to build each author’s profile, precisely reflecting that person’s research and publication achievements and institutional affiliations – all driven by information that’s already moving through the pipeline. It’s an elegant approach to data-mining a document stream for highly practical purposes, an approach presenting immediate benefits for the scholarly publisher.”

“Semantic Fingerprinting is driven by patented natural language processing algorithms,” responded Bob Kasenchak, Production Manager at Access Innovations, when asked to comment on the module’s inclusion in the Version 3.9 software update release. “The Web service enables a publisher to move far beyond adding subject metadata in their pipeline by supplementing it with the author’s research profile. This module and the process also offer a new way to improve precise document search and retrieval. Enhancements to document metadata also present opportunities to support other functions related to marketing or assigning appropriate peer reviewers.”

The Semantic Fingerprinting extension from Data Harmony 3.9 is a Web service (managed by Access Innovations) that relates terms from a publisher’s controlled vocabulary (a taxonomy or thesaurus) to the contributing authors, their affiliated institutions, and other relevant metadata information. Software components such as the user interfaces and entity-matching algorithms are adjustable, because every data set needs a targeted approach. As more data is processed by the matching algorithms and/or human editors, the name authority file and other processes require routine monitoring and adjustments. In many cases, suggestions for adjustments will come from human editors, based on questionable entities that they resolve by searching the name authority file in the Semantic Fingerprinting interface.
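A toy sketch of the disambiguation step may help. The scoring below is only meant to illustrate the idea of a subject-based ‘fingerprint’; the production service relies on patented natural language processing algorithms, a managed name authority file, and editorial review, none of which are represented here.

    # Hypothetical author profiles keyed by authority-file IDs (illustrative only).
    AUTHOR_PROFILES = {
        "a-0001": {"name": "J. Smith", "affiliation": "Univ. of Somewhere",
                   "subjects": {"Optics", "Photonics", "Laser physics"}},
        "a-0002": {"name": "J. Smith", "affiliation": "Another Institute",
                   "subjects": {"Soil science", "Hydrology"}},
    }

    def best_match(name, subject_terms):
        """Rank same-named candidate profiles by overlap with the document's subject terms."""
        candidates = [(pid, p) for pid, p in AUTHOR_PROFILES.items() if p["name"] == name]
        if not candidates:
            return None
        score, pid = max((len(p["subjects"] & set(subject_terms)), pid) for pid, p in candidates)
        return pid if score > 0 else None          # no overlap: leave for editorial review

    print(best_match("J. Smith", ["Photonics", "Optical fibers"]))   # -> 'a-0001'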

About Access Innovations, Inc. – www.accessinn.com, www.dataharmony.com, www.taxodiary.com

Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs.  Data Harmony is used by publishers, governments, and corporate clients throughout the world.

Putting Human Intelligence To Work To Enhance the Value of Information Assets

May 26, 2014  
Posted in Access Insights, Featured, Taxonomy

Semantic enhancement extends beyond journal article indexing, though the ability of users to easily find all the relevant articles (your assets) when searching still remains the central purpose. Now, in addition to articles, semantic “fingerprinting” is used for identifying and clustering ancillary published resources, media, events, authors, members or subscribers, and industry experts.

The system you choose to enhance the value of your assets, and the people behind it, is extraordinarily important.

It starts with a profile of your electronic collection. It may include a profile of your organization as well. As you choose the concepts that represent the areas of research today and in the past, the ideas and thoughts of your most articulate representatives, the emerging methods and technologies, you bring together a picture of the overall effort. This can be done with a thesaurus, an organized list of terms representing those concepts (taxonomy) enhanced with relationship links between terms (synonyms, related terms, web references, scope notes). The profile provides an illustration of the nature of intellectual effort being expended and, equally important, the shape of the organizational knowledge that is your key asset.

We’d like to convince you that human intelligence is still the most powerful engine driving the development and maintenance of this lexicographic profile. Technology tools help with the content mining, frequency analyses, and other measures valuable to a taxonomist, but the organization, concept expression, and relationship building are still best done by humans.

Similarly, the application of the thesaurus is best done by humans. Because of the volume of content items being created every day, it may not be possible to have human indexers review each of them. Our automated systems can achieve perhaps 90% “accuracy” (i.e. matching what a human indexer would choose), so high-valued content is still indexed by humans, much more efficiently than in the past, but still by humans. And the balance requires the contribution of humans to inform the algorithm in actual natural (human) language. Fully enabled, the automated system produces impressive precision in identifying the “aboutness” of a piece of content.

And how can a system achieve accuracy and consistency? Our approach is to reflect the reasoning process of humans, using a set of rules. Our rule base is simple to enhance and simple to maintain, and like the thesaurus, flexible enough to accommodate new terminology in a discipline as it evolves.  About 80% of the rules work well just as initially (automatically) created. The other 20% achieve better precision when ‘touched’ by a human who adds conditions to limit, broaden, or disambiguate the use of the term triggering the rule.
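What a ‘touched’ rule does can be pictured with a simple condition check. This is not M.A.I. rule syntax; it is only a generic illustration of the kind of context condition an editor might add to limit or disambiguate a trigger word.

    def apply_rule(text, trigger, term, require_nearby=(), forbid_nearby=()):
        """Suggest 'term' when 'trigger' appears, subject to simple context conditions."""
        lowered = text.lower()
        if trigger.lower() not in lowered:
            return None
        if any(word.lower() in lowered for word in forbid_nearby):
            return None
        if require_nearby and not any(word.lower() in lowered for word in require_nearby):
            return None
        return term

    print(apply_rule("The probe mapped Mercury's cratered surface.",
                     trigger="mercury", term="Mercury (planet)",
                     require_nearby=("planet", "orbit", "surface")))
    # -> 'Mercury (planet)'; a sentence about mercury thermometers would return None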

Mathematical analyses identify statistical characteristics of a large number of items and are quite useful in making business decisions. But making decisions about meaning? For many decades now, researchers have been working to find a way to analyze natural language that would come somewhere near the precision provided by human indexers and abstractors. Look at IBM’s super-computer “Watson” and the years and resources invested to produce it. It continues to miss the simple (to us) relationships between words and context that humans understand intuitively.

Mary Garcia, Systems Analyst
Access Innovations

The Size of Your Thesaurus

May 19, 2014  
Posted in Access Insights, Featured, Taxonomy

During the initial stages of discussing a new taxonomy project, I am frequently asked questions like:

How granular does my taxonomy need to be?

How many levels deep should the vocabulary go?

And especially:

How many terms should my thesaurus have?

The answer is—of course—it depends.

The smallest thesaurus project with which I’ve ever been involved was for a thesaurus of 11 terms; the largest is a 57,000-word vocabulary.

We once lost a bid because we refused to agree to build a 10,000-word thesaurus (not approximately, exactly); no matter how loudly we insisted that it’s far more logical (“best practice”) to let the data decide the size of the thesaurus, someone had already decided on an arbitrary number.

At Access Innovations, we like to say that we build “content-aware” taxonomies: the data will tell us how large the taxonomy should be. The primary data point is the content: How much is there? What is the ongoing volume being published? Clearly, no one needs a 25,000-word thesaurus to index 1,000 documents; similarly, a 200-term thesaurus is not going to be that useful if you have 800,000 journal articles.

Just as returning 2,000,000 search results is not very helpful (unless what you’re looking for is on the first page), a thesaurus term with which 20,000 articles are tagged isn’t doing that much good—more granularity is probably required. There are very likely sub-types or sub-categories of that concept that you can research and add.

The flip side is that you don’t need terms in your vocabulary—no matter how cool they may be—if there is little or no content requiring them for indexing. Your 1500-word branch of particle physics terms is just dead weight in the great psychology thesaurus you’re developing.

Other factors include the type of users you have searching your content: Are they third-graders? Professional astrophysicists? High school teachers? Reviewing search logs and interviewing users is another way to focus your approach, which in turn will help you gauge the size your taxonomy will be in the end.

Let’s make up an example (as an excuse to post pictures that are fun to look at). We’re building a taxonomy that includes some terms about furniture, including the concept Sofa.

PT  = Sofa
NPT = Couch

Now, being good taxonomists, we’re obviously lightning-fast researchers, so we quickly uncover some other candidate terms:

  • Cabriole
  • Camelback
  • Canapé
  • Chesterfield
  • Davenport
  • Daybed
  • Divan
  • Empire(-style)
  • English Rolled Arm
  • Lawson
  • Loveseat
  • Settee
  • Tuxedo

 It looks like a real taxonomy of sofas would depend at least partly on arm height?

Whereas “couch” is clearly a synonym, these could all be narrower terms (NTs) for Sofa, as they are all distinct types, styles, and sub-classes. Alternatively, these could all be made NPTs for Sofa, so that any occurrence of the words above would index to Sofa and be available for search, browse, etc.

How do we decide the proper course of action?

We let the content tell us.

How many articles in our imaginary corpus reference, e.g., the Cabriole, Camelback, or Canapé? (A minimal decision sketch follows the list below.)

  • If the answer is “none”, there’s clearly no need for this term; however, adding it as an NPT will catch any future occurrences, so we may as well be completist.
  • If the answer is “many”–some significant proportion of the total mentions of Sofa or Couch—then the term definitely merits its own place in the taxonomy.
  • If the answer is “few”—more than none, but not enough to warrant inclusion—go ahead and add it as an NPT.  You can always promote it to preferred term status later.
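Here is the minimal decision sketch promised above. The thresholds are arbitrary placeholders, standing in for the judgment a taxonomist applies after looking at the actual corpus and search logs.

    def term_disposition(occurrences, parent_mentions, share_threshold=0.05):
        """Decide how a candidate term enters the thesaurus, based on corpus counts."""
        if occurrences == 0:
            return "add as NPT (catches any future occurrences)"
        if occurrences / parent_mentions >= share_threshold:
            return "add as a preferred term (an NT of Sofa)"
        return "add as NPT; promote to preferred term later if usage grows"

    print(term_disposition(0, 1200))     # Cabriole: never mentioned
    print(term_disposition(220, 1200))   # Chesterfield: a significant share of Sofa mentions
    print(term_disposition(7, 1200))     # Davenport: only a few mentions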

However—and this is a big exception—if you find through reviewing search logs that a significant number of searchers were looking for a particular term, it might signal that it’s an emerging concept, new trend, or hot topic, in which case you may decide to override the statistical analysis and err on the side of adding it to the thesaurus. It won’t hurt anything, and as long as your hierarchy is well formed and your thesaurus is rich in related terms, people will find what they’re looking for…which is, after all, the goal.

So remember: It’s not only the size of your taxonomy that’s important—it’s how relevant it is to the content and users for which it’s designed.

Bob Kasenchak, Project Coordinator
Access Innovations

Access Innovations, Inc. Announces Release of the Smart Submit Extension Module to Data Harmony Version 3.9

May 12, 2014  
Posted in Access Insights, Featured, Taxonomy

Access Innovations, Inc. has announced the Smart Submit extension module as part of their Data Harmony Version 3.9 release. Smart Submit is a Data Harmony application for integrating author-selected subject metadata into a publishing workflow during the paper submission or upload process. Smart Submit facilitates the addition of taxonomy terms by the author. With Smart Submit, each author provides subject metadata from the publisher taxonomy to accompany the item they are submitting. During the submission process, Data Harmony’s M.A.I. core application suggests subject terms based on a controlled vocabulary, and the author chooses appropriate terms to describe the content of their document, thus enabling early categorization, selection of peer reviewers, and support for trend analysis.

“Smart Submit is an exciting addition to the Data Harmony repertoire,” said Marjorie M. K. Hlava, President of Access Innovations, Inc. “Publishers can easily incorporate Smart Submit, streamlining several steps at the beginning of their workflow, as well as semantically enriching that content at the beginning of the production process. They are getting far more benefits, and doing so without adding time and effort.”

“The approach is simple on the surface and supported by very sophisticated software,” remarked Bob Kasenchak, Production Manager at Access Innovations. “The document is indexed using the Data Harmony software, which returns a list of suggested thesaurus terms, from which the author selects appropriate terms. Smart Submit supports the creation of a ‘semantic fingerprint’ for the author, collecting additional information along with the subject metadata. Finally, the tagged content is added to the digital record to complete production and be added to the data repository. It’s an amazing system to see in action.”

Smart Submit can be implemented in several ways, including:

  • as a tool for assisting authors to self-select appropriate metadata assigned to their name and research at the point of submission into the publishing pipeline;
  • as an editorial interface between a semantically enriched controlled vocabulary and potential submissions to the literature corpus;
  • for editorial review of subject indexing at the point of submission, enabling a robust evolution of a controlled vocabulary (taxonomy or thesaurus) by encouraging timely rule base refinement;
  • for simultaneous assignment of descriptive and subject metadata by curators of document repositories, for efficient integration of documents in a large collection; and
  • as a method of tracking authors and their submissions for conference proceedings, symposia, and the like.

“This flexible system opens the door between Data Harmony software and an author submission pipeline in fascinating new ways,” commented Kirk Sanders, an Access Innovations taxonomist. “Users can choose a configuration that maximizes the gain from their organizational taxonomy, at the point it is needed most: when their authors log on to submit their documents.”

 

About Access Innovations, Inc. – www.accessinn.com, www.dataharmony.com, www.taxodiary.com

Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs.  Data Harmony is used by publishers, governments, and corporate clients throughout the world.

Hold the Mayo! A study in ambiguity


When we (at least those of us in Greater Mexico) hear of or read about Cinco de Mayo, there is no question in our minds that “Mayo” refers to the month of May. The preceding “Cinco de” (Spanish for “Fifth of”) pretty much clinches it. Of course, if the overall content is in Spanish, there might still be some ambiguity about whether it is the holiday that is being referred to, or simply a date that happens to be the one after the fourth of May. (As in “Hey, what day do we get off work?” “The fourth of July, I think.”)

We can generally resolve this kind of ambiguity by the context, as can a good indexing system and a rule base associated with a taxonomy.

If you’re reading this posting, you read English. So there’s a good chance that when you read the word “mayo”, you think of the sandwich spread formerly and formally known as mayonnaise.


Or perhaps the famous Mayo Clinic comes to mind. If you’re an American football fan (I had to throw “American” in there to differentiate the mentioned sport from soccer), you might think of New England Patriots linebacker Jerod Mayo.

The context enables us to recognize which mayo we’re dealing with. Likewise, an indexing system might take context into account when encountering the slippery word. A really good indexing rule base might help you sort things out when you have text about Jerod Mayo’s line of mayonnaise, the proceeds of which he is donating to the Boston (not Mayo) Clinic.


As a person of Irish descent, I know perfectly well that that is not the end of Mayo’s spread. There is a County Mayo in Ireland, which has a few other Mayos, too.


If you consult the Mayo disambiguation page in Wikipedia, you will quickly discover that Mayo goes much further than Ireland. There are Mayos of one sort or another all over the world: towns, rivers, and an assortment of other geographical entities that might easily co-exist in a taxonomy or gazetteer.

Traveling down past the geographical Mayos on the Wikipedia page, one finds the names of dozens and dozens of people, many of whom have Mayo as a first name, and many of whom have Mayo as a last name. Thank goodness the four relatively famous William Mayos have different middle names.

The final category on Wikipedia’s Mayo page is, perhaps inevitably, Other. There are quite a few Other Mayos. And what might the last one be?  Where has this journey taken us?

“Mayo, the Spanish word for May”

Hold the Cinco de Mayo celebration!


Barbara Gilles, Taxonomist
Access Innovations
