Trees, Fractals, and Taxonomies

July 21, 2014  
Posted in Access Insights, Featured, Taxonomy

Image by Solkoll, en.wikipedia.org/wiki/Patterns_in_nature#mediaviewer/File:Dragon_trees.jpg

If you look at a branch of a typical deciduous tree, you can see that it looks like a smaller tree. Likewise, that branch branches off into smaller branches that look like even smaller trees.

This characteristic of trees is an example of what mathematicians, biologists, and systems scientists call self-similarity. Self-similar systems repeat their basic geometry at smaller and smaller scales, creating multiple miniatures of themselves at different scales. In general, natural and mathematical systems in which self-similarity results in complex and detailed patterns are referred to as fractal systems.

Many natural phenomena are or can be fractal:

snowflakes,

Photo of a 12-sided snowflake by Becky Ramotowski, www.srh.noaa.gov/abq/?n=features_snowflake

ocean waves,

Painting by Katsushika Hokusai, www.katsushikahokusai.org/Mount-Fuji-Seen-Below-a-Wave-at-Kanagawa.html / CC BY-NC-ND 3.0

and even broccoli.

Photo by Jon Sullivan, en.wikipedia.org/wiki/Romanesco_broccoli#mediaviewer/File:Fractal_Broccoli.jpg

Trees are loosely fractal. While the trunks don’t keep replicating, the branches do. As the Fractal Explorer observes:

If you don’t know anything about fractals a tree might seem as a very random object. No patterns, no rules. But if you know something about fractals and look closer you can see that basically a tree is a trunk with trees on it. That is a basic pattern that every tree follows.

Taxonomies are often described as taxonomic trees, or as having a tree-like structure. To carry the analogy further, we often refer to the progressively more specific and more numerous hierarchical subdivisions in a taxonomy as branches. The overall domain of a taxonomy, while sometimes referred to as its root, might also be viewed as its trunk.

This raises the question: Are taxonomies fractal? As it turns out, several authors have written articles on the fractal nature of biological genus-and-species taxonomies. These articles discuss the branching characteristics of these taxonomies, the same branching characteristics that we see in taxonomies outside the realm of biological species categorization. They also discuss the mathematical tendencies of the proportions of the various branches, tendencies that could perhaps be a natural result of the degree to which things in a group need to be different before we find it appropriate to give them different names.

In recent years, interdisciplinary scientists such as Christophe Eloy have been studying the natural forces that make trees grow the way they do, and how their growth patterns might make them resilient in windstorms. Interestingly enough, these scientists have been inspired, in part, by an observation that another person with an interdisciplinary approach, Leonardo da Vinci, made 500 years ago.

As Joe Palca explains in “The Wisdom Of Trees (Leonardo Da Vinci Knew It)”:

Leonardo noticed that when trees branch, smaller branches have a precise, mathematical relationship to the branch from which they sprang. Many people have verified Leonardo’s rule, as it’s known, but no one had a good explanation for it. …

Leonardo’s rule is fairly simple, but stating it mathematically is a bit, well, complicated. Eloy did his best:

“When a mother branch branches in two daughter branches, the diameters are such that the surface areas of the two daughter branches, when they sum up, is equal to the area of the mother branch.”

Translation: The surface areas of the two daughter branches add up to the surface area of the mother branch.
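Stated in symbols – a minimal paraphrase in terms of branch diameters, which is how the rule is commonly quoted – it says that when a branch of diameter d₀ splits into n daughter branches of diameters d₁ through dₙ:

```latex
d_0^{\,\Delta} \;=\; \sum_{i=1}^{n} d_i^{\,\Delta}, \qquad \Delta \approx 2
```

With Δ = 2 this is exactly the "areas add up" statement, since the area of a circular cross section scales with the square of its diameter; measured exponents for real trees reportedly sit near 2 rather than exactly at it.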

Here’s another explanation, from Esther Inglis-Arkell’s article “Scientists Still Puzzled by a Fractal Discovered 500 Years Ago”, that might be more intuitive:

Strip the leaves off of the average tree, soak the whole thing in water until it gets mushy, bundle the branches up together, and you’ll get what looks like one long trunk. That’s what Leonardo Da Vinci said in the fifteen hundreds. If a tree trunk splits off into three main branches, each of the branches will be one third the size of the trunk. When each of those branches splits into three again, making nine branches on the second ‘tier’ of the tree, each of these second tier branches will be one ninth the size of the trunk. As the branches grow and split, they will always be a particular fraction of the size of the trunk, and adding together all the fractional bits of each ‘tier’ of branches will always add up to ‘one trunk.’ This isn’t the case in all trees, but the majority hold to this pattern.

Can we gain a new perspective on taxonomies from all this? I think the lesson might have to do with scope, specificity, and detail. According to da Vinci’s observation, tree branches uniformly become thinner until they taper off, yet their total bulk at most levels of the tree is approximately the same. So, in a taxonomy that grows naturally, we might expect the terms at any given depth to be at approximately the same level of specificity. At the same time, their individual scopes at any given depth will add up to a sum total that will ideally (I think) cover the same scope as the top level of terms. As with tree branches tapering off, though, this will be less true as the taxonomy branches naturally taper off and end at the most specific levels.

Inglis-Arkell sums up with some interesting observations about the beauty of branches:

This pattern of growth has a mathematical, as well as physical, beauty. Trees are natural fractals, patterns that repeat smaller and smaller copies of themselves. Each tree branch, from the trunk to the tips, is a copy of the one that came before it. Branches split off from the highest tip the same way they do from the trunk, and each set of branches splits off at the same angle to each other. Physics, math, and biology come together to create the simplest and most efficient growth pattern. It just took Leonardo Da Vinci to first notice it, the big show-off.

 Barbara Gilles, Taxonomist
Access Innovations

Thesaurus evolution – a case study in “Synthetic biology”

July 14, 2014  
Posted in Access Insights, Featured, Taxonomy

The following post, by Rachel Drysdale, originally appeared in PLOS BLOGS on April 8, 2014.

Science does not stand still and neither does the PLOS thesaurus. With more than 10,700 Subject Area terms, we use the thesaurus to index our articles and provide useful links to related papers, enhanced search functions, and, for PLOS ONE (more than 90 articles published every day!), customizable Subject Area-based email alerts and Subject Area landing pages.

Sometimes we decide to renovate a sector of the thesaurus to better reflect the make-up of the PLOS corpus. For example, we’ve long had a Subject Area term for “Synthetic biology,” sitting beneath “Biology and life sciences.” We even have a healthy Synthetic Biology Collection. However, the Subject Area term “Synthetic biology” was being applied to only a handful of articles despite the fact that many more PLOS articles were about synthetic biology and should ideally have been indexed accordingly. Why was this?

Part of the explanation is that ‘synthetic biology’ is not a phrase that is frequently used in natural language. So whereas an article about hypertension may use the word ‘hypertension’ 26 times within the text, an article about synthetic biology might state ‘synthetic biology’ rarely, if at all. This poses a challenge to the Machine Aided Indexing process which assigns Subject Areas to articles based on the frequency of matches in the text.

The way around this is to introduce a level of abstraction to the rulebase that governs the Machine Aided Indexing. The base rules are very literal: “if I see ‘synthetic biology’ in the text I’m going to use the ‘Synthetic biology’ Subject Area term.” But there are additional words and phrases that are diagnostic of synthetic biology topics, such as “biobricks” and “Registry of Standard Biological Parts.” Adding rules for these terms – for example “if I see ‘Registry of Standard Biological Parts’ in the text I’m going to use ‘Synthetic biology’” – increases the frequency of indexing to “Synthetic biology” and thus the retrieval of relevant articles in our searches.
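As a loose illustration of that kind of rulebase (a minimal sketch in Python, not the actual M.A.I. rule syntax), a literal base rule plus a few added abstraction rules might look like this:

```python
# Minimal sketch of a literal rulebase: each trigger phrase, if found in the
# text, suggests a Subject Area term. Not the real M.A.I. rule syntax.
RULES = {
    "synthetic biology": "Synthetic biology",                      # base (literal) rule
    "biobricks": "Synthetic biology",                              # diagnostic phrase
    "registry of standard biological parts": "Synthetic biology",  # diagnostic phrase
    "hypertension": "Hypertension",
}

def suggest_subject_areas(text):
    """Return the Subject Area terms whose trigger phrases appear in the text."""
    lowered = text.lower()
    return {term for phrase, term in RULES.items() if phrase in lowered}

print(suggest_subject_areas(
    "We submitted three BioBricks to the Registry of Standard Biological Parts."
))  # -> {'Synthetic biology'}
```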

A second factor is to do with the hierarchical structure of the thesaurus – an especially important factor given that our search functionality is designed to utilize this hierarchy. For example, a Subject search for “Vascular medicine,” beneath which Hypertension sits, retrieves articles indexed specifically with Hypertension, even if they have not been explicitly tagged with “Vascular medicine.” In earlier versions of the PLOS thesaurus “Synthetic biology” had no narrower terms, and this was doing it no favours with regard to how useful it was for retrieving relevant articles. We therefore reviewed essays about synthetic biology, scope descriptions from relevant institutional and departmental web sites, and proceedings from synthetic biology conferences, all in light of the content of our articles, and introduced new, narrower terms to sit beneath our existing “Synthetic biology” where that made sense.  So we went from having the single “Synthetic biology” term to the new structure of 30 terms in one renovation.  Here is what we have now:

[Screenshot: the new Synthetic biology branch of the PLOS thesaurus]
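To see why those narrower terms matter for retrieval, here is a minimal, hypothetical sketch of hierarchy-aware subject search: a query on a broad Subject Area also retrieves articles indexed only with its narrower terms.

```python
# Hypothetical fragment of a thesaurus hierarchy: term -> narrower terms.
NARROWER = {
    "Vascular medicine": ["Hypertension"],
    "Hypertension": [],
}

def expand(term):
    """Return the term plus all of its descendants in the hierarchy."""
    terms = {term}
    for nt in NARROWER.get(term, []):
        terms |= expand(nt)
    return terms

# Toy article index: article id -> Subject Area terms applied to it.
INDEX = {"journal.pone.0000001": {"Hypertension"}}

query = expand("Vascular medicine")
hits = [doc for doc, terms in INDEX.items() if terms & query]
print(hits)  # the article is retrieved even though it was never tagged "Vascular medicine"
```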

Much of the evolution of the PLOS thesaurus is gradual, as for example when we realised that “puma” can be used as an abbreviation for “p53 upregulated modulator of apoptosis” as well as a kind of big cat, or learned that asteroids can be starfish. Dealing with these indexing missteps requires small-scale changes to specific rules. But sometimes the change needs to be more radical. Our new “Synthetic biology” sector was implemented in Ambra 2.9.12 (released March 26th, 2014). Where previously only a handful of articles was indexed with “Synthetic biology,” now a Subject search across all PLOS journals retrieves over 400 “Synthetic biology” articles – much more fitting for this important and developing field.

For more about the work PLOS is doing with Synthetic biology see “An Invitation to Contribute to the Second Life of the Synthetic Biology Collection.”

Access Innovations, Inc. Now Accepting Presentation Abstracts for the Eleventh Annual Data Harmony Users Group Meeting

July 7, 2014  
Posted in Access Insights, Featured, Taxonomy

Access Innovations, Inc. is pleased to announce the Call for Presentations for the 2015 Data Harmony Users Group (DHUG) meeting. The annual DHUG meeting is held every February at Access Innovations company headquarters in Albuquerque, New Mexico. DHUG 2015 is the eleventh annual meeting and will focus on leveraging of taxonomies and tagged data, techniques for integrating tagged data flows into production cycles, and inventive ways to improve the user experience.

The theme for the meeting, “Beyond Subject Metadata, or, So you have a Taxonomy!… now what?” urges Data Harmony users to ask questions such as the following:

  • What do I do now that my content is tagged?
  • How do I integrate that tagged content into my workflow or production cycle?
  • How can I get my newly-tagged content in front of my users?
  • How can I improve the search experience for my users who want to access these information assets?
  • Are there other features I can add based on the metadata tagging now in place?
  • What other implementations can I set up to capitalize on content objects organized around my taxonomy?

For the first time, Data Harmony users can submit presentation proposals using the company’s Smart Submit software extension module, at http://www.dataharmony.com/dhug/submissions. The system is a full working implementation of the module and demonstrates how easy it is to use. The deadline for inclusion in the preliminary program is September 20, 2014.

In the DHUG 2015 implementation of Smart Submit, the first screen includes fields for entering such information as title, creator (author or presenter, usually a DHUG member), abstract, contact information, and a brief biography of the presenter. Optionally, the user may choose to upload a PDF or Microsoft Word file. There are also some fields customized for the meeting organizer, such as on what day of the week a presenter would prefer to be scheduled, and how long his/her presentation will be.

In the second screen, Smart Submit uses Data Harmony’s M.A.I.™ (Machine Aided Indexer) software module to display suggested indexing terms from the Access Innovations thesaurus to characterize the presentation. M.A.I. bases its automated indexing assistance on the text in the title, the abstract, and any PDF or Microsoft Word document that was uploaded via the first screen. The presenter chooses to retain or remove each of the suggested terms and may add additional terms from the thesaurus. The system also allows for searching the thesaurus and adding terms from the search results view.

“This is an exciting addition to the DHUG meeting planning process,” remarked Heather Kotula, Marketing Coordinator for Access Innovations. “We made it a priority to showcase our own software this year. Using Smart Submit to collect presentation proposals is going to make my job of organizing the meeting easier, faster, more complete, and more accurate.”

DHUG registration includes breakfast, lunch, and breaks with refreshments for all five days of the meeting, February 16th-20th, 2015. A networking reception will be held Monday evening at the University/Midtown Hampton Inn. On Tuesday evening, dinner will be provided for all attendees at a unique Albuquerque attraction. The University/Midtown Hampton Inn is the primary DHUG meeting hotel, offering a $79 nightly rate for members.

For more information about DHUG 2015, please visit http://www.dataharmony.com/dhug/dhug2015.

Inline Tagging – What’s to Know?

June 30, 2014  
Posted in Access Insights, Featured, metadata, Taxonomy

Data Harmony released their Inline Tagging Web service extension recently – let’s talk about inline tagging software and the information environments best suited to benefit from it.

Web developers are implementing inline tagging software in an increasing variety of information environments, spurred on by the creativity of users requesting new features based on accurate placement of inline tags. And it’s probably safe to say that many users aren’t aware that inline tagging propels some of the innovations they enjoy in their graphical user interface (GUI) – right at the level of the onscreen text.

Data Harmony recently released their Inline Tagging Web service as one of the Version 3.9 ‘extension modules’ – causing me to wonder:

  • What kinds of Web computing environments are well-suited for leveraging subject tags at the level of inline text?
  • What is inline tagging good for? What can a subject tag accomplish when it’s been matched to a specific word’s location in the input text?
  • What is the Data Harmony development team’s vision for implementation of the Inline Tagging extension?
  • Can tags other than subject indexing terms be deployed for inline tagging?

To begin at the end of the tale, the answer to the last question is ‘Yes’ – geographical terms and other non-subject tags can be deployed for inline tagging, since inline tags are based on accurate indexing, which in turn is reliant on controlled vocabularies.

Controlled vocabularies such as taxonomies and thesauri can store terms like place names and other kinds of terms that don’t capture strictly conceptual information. Rather, they serve as an authority file for other forms of information – geographical, for example. Inline tagging applications can also match these non-conceptual terms during analysis of input text, and can be configured to extend functionality for a purpose like linking to a geographical database for supporting information. For example, if ‘Canada’ were matched in the text, inline tagging might activate a mouse-over window that offers the user a chance to look at a relevant entry from an atlas or encyclopedia. If the user chooses to click on the word ‘Canada’ in the text, a new interface tab opens to the relevant entry.
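A minimal sketch of that behavior – assuming a simple HTML output and a placeholder atlas URL, as an illustration of the idea rather than the Data Harmony implementation – might wrap each matched vocabulary term in a link at its exact position in the text:

```python
import re

# Hypothetical authority file of non-subject (geographical) terms -> supporting resources.
GEO_TERMS = {"Canada": "https://example.org/atlas/canada"}  # placeholder URL

def tag_inline(text):
    """Wrap each matched term, in place, with a link to its supporting entry."""
    for term, url in GEO_TERMS.items():
        text = re.sub(
            rf"\b{re.escape(term)}\b",
            f'<a class="inline-tag" href="{url}" title="Open atlas entry">{term}</a>',
            text,
        )
    return text

print(tag_inline("Exports from Canada rose sharply last year."))
```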

Guess what I discovered on taking my questions to the Data Harmony 3.9 developers… implementation ideas!

As a tool for search engines to boost the results of document search and retrieval

When a tag is included inline in a text object found by a search engine, words immediately around the tag (or the entire sentence) can be returned to the search engine, to supplement search results by providing context information about the match’s location in the found document.

The capability to return search term matches along with their context is significant in publications with multiple sections or chapters, to permit easier division into identifiable sections and subsections. Many publishers now offer content for sale in smaller pieces, so each customer can put together a ‘customized electronic book’ by combining chunks from different sources. Search and retrieval in publication collections retrieves relevant sections and subsections for recombination into new content objects. Accurate inline tagging facilitates this highly effective search strategy.
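A hedged sketch of that idea: once the tag’s position in the text is known, the surrounding sentence can be returned as context alongside the search result.

```python
import re

def sentence_around(text, start, end):
    """Return the sentence containing the character span [start, end)."""
    left = max(text.rfind(". ", 0, start), 0)
    right = text.find(". ", end)
    right = len(text) if right == -1 else right + 1
    return text[left:right].lstrip(". ").strip()

doc = "Chapter 2 covers assembly standards. BioBricks are interchangeable parts. See Chapter 5."
match = re.search(r"BioBricks", doc)
print(sentence_around(doc, match.start(), match.end()))
# -> "BioBricks are interchangeable parts."
```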

To turn up the volume of social media postings

Inline tagging can add value to search and retrieval within social media communities, increasing the value of the metadata that’s already there in posts! You can use it to better categorize and link related Twitter ‘tweets,’ professional discussions, social issue blogs, and closed community forums (chat rooms) – turning up the volume!

A well-placed inline tag inside a blog entry offers a semantic hook for Web applications to latch onto: blog postings can be followed within a certain date range only, or sent to designated recipients automatically when contributors write about any subject of definite interest.

As a lexicological training tool

Inline tagging methods can provide information for a language learner or human indexer about the meaning, form, and usage of words, while keeping the context in view.

In XML databases

XML databases often build indexes of searchable data by polling, at incredible speeds, all text in all available XML files – even for millions of records – and storing results in a repository. Inline tagging offers an alternative to the traditional polling method that often serves as the foundation for document search and retrieval in an XML database. Inline tagging methods enable you to describe fields with unique inline XML tags, for later recognition and retrieval by the spidering engines. Learn more.
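As a rough sketch of that alternative (the element names here are invented for illustration), fields marked with unique inline tags can be pulled directly, rather than re-polling all of the text in every file:

```python
import xml.etree.ElementTree as ET

record = """
<article>
  <body>Field observations of <species>Pisaster ochraceus</species>
        near <place>Vancouver</place>.</body>
</article>
"""

root = ET.fromstring(record)
# Retrieve only the inline-tagged fields, instead of scanning the full text.
fields = {elem.tag: elem.text for elem in root.iter() if elem.tag in {"species", "place"}}
print(fields)  # {'species': 'Pisaster ochraceus', 'place': 'Vancouver'}
```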

Kirk Sanders, Editorial Services
Project Manager, Access Innovations

Rule Base Solutions

People often ask us how much time it will take to manage a rule base with Data Harmony software. We reply with specific customer experience numbers: a few hours per month of editorial time to maintain both the thesaurus and the rule base. One customer of ours, the American Institute of Physics, found that maintaining their thesaurus and rule base takes less than 15 hours per month for a throughput of 2,000 articles per week. Another customer, The Weather Channel, manages breaking news all day long with four hours per month of maintenance. It takes the editorial team just a few hours per month to keep up with the changing trends and events within their field and transfer those into the organizational knowledge base represented by the M.A.I.™ rule base. This is a small investment that provides the organization with the highest level of accuracy in coding (usually well over 90% hits without human intervention), as well as support for analysis of trends in the business, the creation of author profiles, semantic fingerprints of the entire organizational holdings, and the extraction of real meaning from all the data. Other customers, such as IEEE and the US GAO, find the accuracy of their Data Harmony software implementations so high that they now only sample the data periodically to glean new terms and trends. They do not see the need to review every single item.

The real question, though, should be a matter of control. If a rule-based solution maintained by the editorial staff is the approach taken, then full control remains with the editorial department. If a programmatic learning system – the seductive call of the purely automatic system – is the choice, then oversight either remains with the vendor or moves to the IT (information technology) department. The lower accuracy of the indexing returns (usually in the 60% range) means much more time spent by the editorial department on the production of the taxonomy tagged items. The time that would have been spent improving the knowledge base is instead spent in production time processing records, due to lower accuracy levels.

Here’s an example: let’s assume 1,000 articles per month. Using 90% accuracy versus 60% accuracy, how much extra production time is involved? Let’s also suppose, for easy calculation, that there are 10 terms per article. If our rule base indexing is 90% accurate, then only one term per article will need to be reviewed, researched, and replaced or discarded. If alternative indexing methods produce 60% accuracy, then there are four terms per record to research, replace, or discard. The time to research a term and decide on its disposition is conservatively two minutes. So two minutes per term at one term per article is just 33.3 hours per month. But if four terms per article (60% accuracy) need reviewing, then 133.3 editorial hours per month are needed – obviously, four times the effort. Moreover, the rule base improves over time with this small editorial input, so the maintenance time continues to decrease.
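For the record, here is the same back-of-the-envelope arithmetic as a quick script:

```python
# Back-of-the-envelope review-time estimate from the example above.
ARTICLES_PER_MONTH = 1000
TERMS_PER_ARTICLE = 10
MINUTES_PER_TERM_REVIEW = 2

def review_hours(accuracy):
    """Editorial hours per month spent fixing incorrectly suggested terms."""
    wrong_terms_per_article = TERMS_PER_ARTICLE * (1 - accuracy)
    minutes = ARTICLES_PER_MONTH * wrong_terms_per_article * MINUTES_PER_TERM_REVIEW
    return minutes / 60

print(f"90% accurate: {review_hours(0.90):.1f} hours/month")  # ~33.3
print(f"60% accurate: {review_hours(0.60):.1f} hours/month")  # ~133.3
```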

A statistical approach can appear to be a gift on a silver platter, but beware – such an approach means more time spent on production, less on building a knowledge base, lower accuracy, higher throughput costs, and no chance to learn about the data through semantic fingerprinting. To make matters even more frustrating, you have little control of the system. It has to be improved and worked on by the vendor or the IT department. New terms require a full revamping of the system each time, resulting in costly delays, rather than the real-time, instant updates that a system based on Java object-oriented programming allows. As a result, the taxonomy is not responsive to the organization’s data.

It is tempting to think that the classification of content can be done without the use of a vetted taxonomy properly applied or that the taxonomy only provides a convenient file folder naming convention. Unfortunately, the cost is high to make that choice. The accuracy is lower, the throughput is slower, and the clerical aspect of the indexing process is increased when you use a statistical system. In addition, control is no longer with the editorial department, but shifted to IT and the vendor. The power dynamic of the choice is clear: IT versus editorial. Who do you want to be in control of your indexing?

Marjorie M.K. Hlava
President, Access Innovations

Data Harmony Version 3.9 Includes MAI Batch GUI – A New Interface For M.A.I.™ (Machine Aided Indexer) and MAIstro™ Modules

June 16, 2014  
Posted in Access Insights, Featured, metadata, semantic

Access Innovations, Inc. has announced the inclusion of the MAI Batch Graphical User Interface (GUI) as part of the recent Data Harmony Version 3.9 software update release. MAI Batch GUI is a new interface for running a full directory of files through the M.A.I. Concept Extractor. This tool enables processing of large amounts of text through the Data Harmony M.A.I. Concept Extractor with a single command. Usually used in working with legacy or archival files, it allows complete semantic enrichment of entire back files in a short time. Once the batch is run, the terms from the thesaurus or taxonomy become part of each record itself.

“For Data Harmony Version 3.9, we decided to add the interface to the MAIstro and M.A.I. modules to allow use directly from the desktop, giving more power to the user,” remarked Marjorie M. K. Hlava, President of Access Innovations, Inc. “It’s a fast, easy way to perform machine-aided indexing on batches of documents, without any need for command-line instructions.”

“M.A.I.’s batch-indexing capability has been in place for years via command line interface,” noted Bob Kasenchak, Production Manager at Access Innovations. “This new GUI makes it really easy to use. Customers only need to open ‘MAI Batch app’ in their Data Harmony Administrative Module, choose the files or directories to process, and submit the job.”

The purpose of MAI Batch is to provide immediate processing of data files on demand. MAI Batch can be deployed to achieve rapid subject indexing of legacy text collections.

MAI Batch GUI offers semantic enrichment by extracting concepts from input text in most file formats, including the following:

  • Adobe PDFs
  • MS Word DOC files
  • HTM/HTML pages
  • RTF documents
  • XML files

For XML files, the ‘XML Tags’ option permits users to define specific XML elements for MAI Batch GUI to analyze during batch processing. This option opens the door for indexing source documents that are tagged according to different XML schemas. XML Tags also permits the exclusion during indexing of sections in the document structure, as designated by the user.

The interface’s Input and Output panes present a practical view of the batch during processing, enabling a degree of interactivity – M.A.I. is a very accessible automatic indexing system. It’s a ‘machine-aided’ software approach, even when applied to batches of documents. IT support is helpful but not required to process and maintain the Data Harmony Suite of products.

When the documents already contain indexing terms, MAI Batch GUI derives accuracy statistics for the batch and includes them in the output. M.A.I. calculates the indexing accuracy of the terms suggested by Concept Extractor by comparing them with the previously applied subject terms. This powerful method for enhancing the accuracy of subject indexing is based on reports generated by the M.A.I. Statistics Collector, giving a taxonomy administrator all the data needed to continually improve the results based on the system recommendations, selections, and additions.
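As a loose sketch of what such a comparison looks like in general (the actual statistics the product reports may differ), suggested terms can be scored against the previously applied ones:

```python
def indexing_accuracy(suggested, existing):
    """Compare machine-suggested terms with the terms already on the record."""
    hits = suggested & existing
    return {
        "precision": len(hits) / len(suggested) if suggested else 0.0,
        "recall": len(hits) / len(existing) if existing else 0.0,
    }

print(indexing_accuracy(
    suggested={"Hypertension", "Vascular medicine", "Synthetic biology"},
    existing={"Hypertension", "Vascular medicine"},
))  # {'precision': 0.666..., 'recall': 1.0}
```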

About Access Innovations, Inc. – www.accessinn.com, www.dataharmony.com, www.taxodiary.com

Founded in 1978, Access Innovations has leveraged semantic enrichment of text for internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs.  Data Harmony is used by publishers, governments, and corporate clients throughout the world.

Blind Alleys, Dead Ends, and Mazes

June 9, 2014  
Posted in Access Insights, Featured, Taxonomy

“I don’t know where I am!”

Time traveler Clara Oswald becomes disoriented once again, in a scary encounter with a taxonomy displayed in flat format.

Taxonomies can be displayed in a variety of ways. One of the display types that we occasionally see is known as the flat format display. It’s described in the main U.S. standard for controlled vocabularies, ANSI/NISO Z39.19 (Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies, published by the National Information Standards Organization) as follows:

“The flat format is the most commonly used controlled vocabulary display format. It consists of all the terms arranged in alphabetical order, including their term details, and one level of BT/NT hierarchy.”

At the top level, this format might (or might not) look like that of other hierarchical vocabularies when they are collapsed. What happens, though, when you start navigating to deeper levels? Let’s take a look at the ERIC Thesaurus published by the U.S. Department of Education’s Institute of Education Sciences. Here’s the initial view, when you choose to browse the thesaurus:

[Screenshot: initial browse view of the ERIC Thesaurus]
Aha, top terms, yes? Unfortunately, no. These are non-hierarchy category labels into which the actual terms are grouped, without regard for hierarchical placement. Clicking on any of these category labels results in a flat alphabetical display of all the terms in that category. This is something that thesaurus publishers can get away with when they use a flat format display.

If you click on the first category, Agriculture and Natural Resources, you see a flat alphabetical list of terms, including Agricultural Education. Clicking on that, you would discover that its one broader term is Education (no, not Agriculture and Natural Resources), and that its one narrower term is Young Farmer Education. What you see is basically a term record, and that’s all. That’s flat format display.

[Screenshot: the ERIC term record for Agricultural Education]
Are there problems with this? I think so. Even if the vocabulary is viewed only by the people constructing and maintaining it, those people will have difficulty spotting gaps and redundancies. And even if the vocabulary is used only by in-house human indexers, they will have difficulty exploring it to find the most appropriate terms to apply for indexing, and they will tend to use the first terms they come across that seem to fit. In the latter scenario, the ignored terms are apt to fall victim to usage statistics, even if they’re good terms that should have been used. (I’ve seen this happen to at least one taxonomy.)

While the format may have simplified things in the days of printed taxonomies, it creates problems for today’s taxonomists and indexers. Think, too, of the searchers looking for information resources. Searchers benefit from being able to navigate and explore a taxonomy, and to take full advantage of its hierarchical structure. The flat format doesn’t present a hierarchy; instead, it presents obstacles.

Blind Alleys

While you’re traveling down one path, you don’t have an opportunity to see what’s in nearby pathways, or in distant but related pathways.

Dead Ends

You can’t see where you’re headed, or how far the path goes. The path that you originally saw as promising might only lead to a stone wall, after you’ve already traveled one term at a time to get there. (Some flat format taxonomies, though, turn out to be unexpectedly shallow, so you’re more apt to hit a dead end sooner than later.)

Mazes

Because you can’t see more than one level before and after the term you’re in, and you can’t see over the hedge to other pathways, you may end up zigzagging and backtracking through the taxonomy in a frustrating guessing game.

Getting a Better View

Ideally, you should be able to view the full panorama of a taxonomy’s coverage. At the same time, you should be able to focus on the areas of interest to you. And you should be able to view more than one branch at the same time, and to view entire branches. To accomplish those goals, you need a full hierarchical display that you can expand and collapse as needed. The example below is a screenshot of the MediaSleuth thesaurus, some branches of which I’ve temporarily exposed to an expanded view with a click of the mouse.

[Screenshot: the MediaSleuth thesaurus with several branches expanded]
With this kind of view, we can see our way in all directions, from wherever we are. We can see where we might want to go from there, and how to get there. We know exactly where we are.

Barbara Gilles, Taxonomist
Access Innovations

Access Innovations, Inc. Announces Release of the Semantic Fingerprinting Web Service Extension for Data Harmony Version 3.9

June 2, 2014  
Posted in Access Insights, Featured, semantic

Access Innovations, Inc. announces the Semantic Fingerprinting Web service extension as part of their Data Harmony Version 3.9 release. Semantic Fingerprinting is a managed Web service offered to scholarly publishers to disambiguate author names and affiliations by leveraging semantic metadata within an existing publishing pipeline.

The Semantic Fingerprinting Web service data mines a publisher’s document collection to build a database of named authors and affiliated institutions, and then expands the database over time with customization and administration services provided by Access Innovations during configuration. The author/affiliation database powers M.A.I.™ (Machine Aided Indexer) algorithms for matching names in new content received from contributors. During the configuration phase, an essential component is the graphical user interface (GUI) where users disambiguate unmatched names using clues that M.A.I. surfaces as a result of rigorous document analysis.

“Like a fingerprint, each author has a unique ‘semantic profile’ that captures the specific disciplines and topic areas in which they publish – reflecting subject areas covered in their body of research. Data Harmony generates subject keywords that describe the document’s content, to increase the number of author name matches a reviewer can find during editorial review of unresolved names,” explained Kirk Sanders, Access Innovations Taxonomist and Data Harmony Technical Editor.

“Semantic Fingerprinting is a versatile addition to the Data Harmony software lineup,” said Marjorie M. K. Hlava, President of Access Innovations, Inc. “Publishers can incorporate Semantic Fingerprinting to build each author’s profile, precisely reflecting that person’s research and publication achievements and institutional affiliations – all driven by information that’s already moving through the pipeline. It’s an elegant approach to data-mining a document stream for highly practical purposes, an approach presenting immediate benefits for the scholarly publisher.”

“Semantic Fingerprinting is driven by patented natural language processing algorithms,” responded Bob Kasenchak, Production Manager at Access Innovations, when asked to comment on the module’s inclusion in the Version 3.9 software update release. “The Web service enables a publisher to move far beyond adding subject metadata in their pipeline by supplementing it with the author’s research profile. This module and the process also offer a new way to improve precise document search and retrieval. Enhancements to document metadata also present opportunities to support other functions related to marketing or assigning appropriate peer reviewers.”

The Semantic Fingerprinting extension from Data Harmony 3.9 is a Web service (managed by Access Innovations) that relates terms from a publisher’s controlled vocabulary (a taxonomy or thesaurus) to the contributing authors, their affiliated institutions, and other relevant metadata information. Software components such as the user interfaces and entity-matching algorithms are adjustable, because every data set needs a targeted approach. As more data is processed by the matching algorithms and/or human editors, the name authority file and other processes require routine monitoring and adjustments. In many cases, suggestions for adjustments will come from human editors, based on questionable entities that they resolve by searching the name authority file in the Semantic Fingerprinting interface.
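As a loose illustration of the general idea of a ‘semantic profile’ (not Data Harmony’s patented algorithms), candidate author records for an ambiguous name can be ranked by how much their accumulated subject terms overlap with the keywords generated for the new document:

```python
# Hypothetical author profiles: subject terms accumulated from prior publications.
PROFILES = {
    "J. Smith (Univ. A)": {"Hypertension", "Vascular medicine", "Epidemiology"},
    "J. Smith (Univ. B)": {"Synthetic biology", "Biological engineering"},
}

def rank_candidates(doc_keywords):
    """Rank ambiguous 'J. Smith' records by keyword overlap (Jaccard) with the document."""
    scores = []
    for author, fingerprint in PROFILES.items():
        overlap = len(doc_keywords & fingerprint) / len(doc_keywords | fingerprint)
        scores.append((author, overlap))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(rank_candidates({"Synthetic biology", "Biological engineering", "Genetic circuits"}))
# The second record scores highest and is surfaced to the reviewer first.
```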

About Access Innovations, Inc. – www.accessinn.com, www.dataharmony.com, www.taxodiary.com

Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs.  Data Harmony is used by publishers, governments, and corporate clients throughout the world.

Putting Human Intelligence To Work To Enhance the Value of Information Assets

May 26, 2014  
Posted in Access Insights, Featured, Taxonomy

Semantic enhancement extends beyond journal article indexing, though the ability of users to easily find all the relevant articles (your assets) when searching still remains the central purpose. Now, in addition to articles, semantic “fingerprinting” is used for identifying and clustering ancillary published resources, media, events, authors, members or subscribers, and industry experts.

The system you choose to enhance the value of your assets, and the people behind it, is extraordinarily important.

It starts with a profile of your electronic collection. It may include a profile of your organization as well. As you choose the concepts that represent the areas of research today and in the past, the ideas and thoughts of your most articulate representatives, the emerging methods and technologies, you bring together a picture of the overall effort. This can be done with a thesaurus, an organized list of terms representing those concepts (taxonomy) enhanced with relationship links between terms (synonyms, related terms, web references, scope notes). The profile provides an illustration of the nature of intellectual effort being expended and, equally important, the shape of the organizational knowledge that is your key asset.

We’d like to convince you that human intelligence is still the most powerful engine driving the development and maintenance of this lexicographic profile. Technology tools help with the content mining, frequency analyses, and other measures valuable to a taxonomist, but the organization, concept expression, and relationship building are still best done by humans.

Similarly, the application of the thesaurus is best done by humans. Because of the volume of content items being created every day, it may not be possible to have human indexers review each of them. Our automated systems can achieve perhaps 90% “accuracy” (i.e., matching what a human indexer would choose), so high-value content is still indexed by humans – much more efficiently than in the past, but still by humans. The balance requires the contribution of humans to inform the algorithm in actual natural (human) language. Fully enabled, the automated system produces impressive precision in identifying the “aboutness” of a piece of content.

And how can a system achieve accuracy and consistency? Our approach is to reflect the reasoning process of humans, using a set of rules. Our rule base is simple to enhance and simple to maintain, and like the thesaurus, flexible enough to accommodate new terminology in a discipline as it evolves.  About 80% of the rules work well just as initially (automatically) created. The other 20% achieve better precision when ‘touched’ by a human who adds conditions to limit, broaden, or disambiguate the use of the term triggering the rule.
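As a minimal, made-up illustration of what “adding conditions” means (borrowing the ‘puma’ ambiguity mentioned earlier in this issue; this is not the actual rulebase syntax), an editor-added condition might look like this:

```python
def apply_puma_rule(text):
    """Tag 'puma' only after checking context, mirroring a human-added rule condition."""
    terms = []
    lowered = text.lower()
    if "puma" in lowered:
        # Condition added by an editor: in an apoptosis context, PUMA is the
        # protein ("p53 upregulated modulator of apoptosis"), not the big cat.
        if "apoptosis" in lowered or "p53" in lowered:
            terms.append("Apoptosis")   # hypothetical term names
        else:
            terms.append("Pumas")
    return terms

print(apply_puma_rule("PUMA is a p53 upregulated modulator of apoptosis."))  # ['Apoptosis']
print(apply_puma_rule("The puma ranged across the canyon at dusk."))         # ['Pumas']
```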

Mathematical analyses can identify the statistical characteristics of a large number of items and are quite useful in making business decisions. But making decisions about meaning? For many decades now, researchers have been working to find a way to analyze natural language that would come anywhere near the precision provided by human indexers and abstractors. Look at IBM’s supercomputer “Watson” and the years and resources invested to produce it. It continues to miss the simple (to us) relationships between words and context that humans understand intuitively.

Mary Garcia, Systems Analyst
Access Innovations

The Size of Your Thesaurus

May 19, 2014  
Posted in Access Insights, Featured, Taxonomy

During the initial stages of discussing a new taxonomy project, I am frequently asked questions like:

How granular does my taxonomy need to be?

How many levels deep should the vocabulary go?

And especially:

How many terms should my thesaurus have?

The answer is—of course—it depends.

The smallest thesaurus project with which I’ve ever been involved was for a thesaurus of 11 terms; the largest is a 57,000-word vocabulary.

We once lost a bid because we refused to agree to build a 10,000-word thesaurus (not approximately, exactly); no matter how loudly we insisted that it’s far more logical (“best practice”) to let the data decide the size of the thesaurus, someone had already decided on an arbitrary number.

At Access Innovations, we like to say that we build “content-aware” taxonomies, that the data will tell us how large the taxonomy should be. The primary data point is the content: How much is there? What is the ongoing volume being published? Clearly, no one needs a 25,000-word thesaurus to index 1000 documents; similarly, a 200-term thesaurus is not going to be that useful if you have 800,000 journal articles.

Just as returning 2,000,000 search results is not very helpful (unless what you’re looking for is on the first page), a thesaurus term with which 20,000 articles are tagged isn’t doing that much good—more granularity is probably required. There are very likely sub-types or sub-categories of that concept that you can research and add.

The flip side is that you don’t need terms in your vocabulary—no matter how cool they may be—if there is little or no content requiring them for indexing. Your 1500-word branch of particle physics terms is just dead weight in the great psychology thesaurus you’re developing.

Other factors include the type of users you have searching your content: Are they third-graders? Professional astrophysicists? High school teachers? Reviewing search logs and interviewing users is another way to focus your approach, which in turn will help you gauge the size your taxonomy will be in the end.

Let’s make up an example (as an excuse to post pictures that are fun to look at). We’re building a taxonomy that includes some terms about furniture, including the concept Sofa.

PT = Sofa

NPT = Couch

Now, being good taxonomists, we’re obviously lightning-fast researchers, so we quickly uncover some other candidate terms:

Cabriole

Camelback

Canapé

Chesterfield

Davenport

Daybed

Divan

Empire(-style)

English Rolled Arm

Lawson

Loveseat

Settee

Tuxedo

It looks like a real taxonomy of sofas would depend at least partly on arm height?

Whereas “couch” is clearly a synonym, these could all be narrower terms (NTs) for Sofa, as they are all distinct types, styles, and sub-classes. Alternatively, these could all be made NPTs for Sofa, so that any occurrence of the words above would index to Sofa and be available for search, browse, etc.

How do we decide the proper course of action?

We let the content tell us.

How many articles in our imaginary corpus reference e.g. the Cabriole, Camelback, or Canapé?

  • If the answer is “none”, there’s clearly no need for this term; however, adding it as an NPT will catch any future occurrences, so we may as well be completist.
  • If the answer is “many”–some significant proportion of the total mentions of Sofa or Couch—then the term definitely merits its own place in the taxonomy.
  • If the answer is “few”—more than none, but not enough to warrant inclusion—go ahead and add it as an NPT.  You can always promote it to preferred term status later.

However—and this is a big exception—if you find through reviewing search logs that a significant number of searchers were looking for a particular term, it might signal that it’s an emerging concept, new trend, or hot topic, in which case you may decide to override the statistical analysis and err on the side of adding it to the thesaurus. It won’t hurt anything, and as long as your hierarchy is well formed and your thesaurus is rich in related terms, people will find what they’re looking for…which is, after all, the goal.
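Putting those rules of thumb together, here is a minimal sketch of the decision for a single candidate term (the thresholds are invented for illustration, and in practice the call is an editorial one):

```python
def term_disposition(occurrences, sofa_mentions, search_demand):
    """Decide how a candidate term (e.g. 'Camelback') should enter the thesaurus."""
    if search_demand > 50:                    # users keep looking for it: emerging topic
        return "add as preferred term (override the statistics)"
    if occurrences == 0:
        return "add as NPT to catch future occurrences"
    if occurrences / max(sofa_mentions, 1) >= 0.05:  # a significant share of Sofa content
        return "add as preferred term (narrower term of Sofa)"
    return "add as NPT; promote to preferred term later if usage grows"

print(term_disposition(occurrences=3, sofa_mentions=400, search_demand=4))
# -> "add as NPT; promote to preferred term later if usage grows"
```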

So remember: It’s not only the size of your taxonomy that’s important—it’s how relevant it is to the content and users for which it’s designed.

Bob Kasenchak, Project Coordinator
Access Innovations
