Registration is now open for the 11th annual Data Harmony Users Group (DHUG) meeting, scheduled for February 16-20, 2015 at the Access Innovations, Inc. offices at 4725 Indian School Road Northeast, in Albuquerque, New Mexico.
Access Innovations, the developer and seller of the Data Harmony product line, is hosting the meeting. Several new features and options will be unveiled during the Annual Features Update report, including an introduction to the new graphical user interface (GUI).
A full day of training on building taxonomies is included on Monday, February 16, 2015. The Annual Features Update report will be presented by Access Innovations President Marjorie M.K. Hlava on Tuesday morning from 9:00 a.m. to noon. On Tuesday afternoon and on Wednesday, Data Harmony users will present case studies detailing their implementations of the software. Two full days of hands-on software training sessions led by Access Innovations and Data Harmony staff members are scheduled for Thursday, February 19, and Friday, February 20.
The meeting also includes a Monday evening networking reception and a networking dinner on Tuesday evening. The Tuesday evening dinner will be held at the Indian Pueblo Cultural Center, a museum and educational center dedicated to preserving and perpetuating Pueblo culture and to advancing understanding, by presenting, with dignity and respect, the accomplishments and evolving history of the Pueblo people of New Mexico.
Because the meeting is being held at the Access Innovations home office, the entire staff will be able to participate in discussions. “Each year, this meeting provides our members an opportunity to share ideas and address issues and methodologies with colleagues,” said Ms. Hlava. “We enjoy talking with our clients and finding out what items are on their wish lists for future software developments, and the new releases reflect those requests.”
Also, members have the opportunity to discuss technical and tactical issues with Access Innovations staff in person. “Just by sitting down together, we can work through key issues quickly and to everyone’s benefit,” said Bob Kasenchak, Production Coordinator at Access Innovations. “Even little questions that come up during these discussions can get resolved – questions that don’t seem important enough to bring up during conference calls or in email correspondence.”
To register for the meeting, go to www.dataharmony.com/dhug/regform/.
For information about planning a trip to Albuquerque for the meeting, go to www.dataharmony.com/dhug/dhugtripplanning/.
To see the provisional agenda, go to www.dataharmony.com/wp-content/uploads/2014/10/Agenda-2015.pdf.
“Time is free, but it’s priceless. You can’t own it, but you can use it. You can’t keep it, but you can spend it. Once you’ve lost it you can never get it back.” So muses American businessman Harvey MacKay.
We have no choice in the matter. Time cannot be “saved” …only spent. Our responsibility is to determine how we wish to allocate it. Otherwise, time will not only be spent but also wasted. How valuable, then, are those skills and tools that help us distribute our time in ways that we consider most useful and productive!
Shiyali Ramamrita Ranganathan proposed the fourth law of library science to be: “Save the time of the reader.” Fast and accurate retrieval of relevant information is one of the fundamental arguments in favor of enterprise taxonomy development and usage. Let’s consider some active search strategies that will help you avoid tail-chasing and wearying labyrinths when searching for project taxonomy resources to assist you in your knowledge management.
If you have ever conducted a keyword search on the open web for online taxonomy resources, you may have had some difficulty hitting your target. After simply typing the keyword “taxonomy” or “taxonomies” or “thesaurus” or “thesauri” into your favorite search engine window, you may have obtained less than satisfactory results. How many of your results were even remotely related to information structures, knowledge organization, or contextualized concepts organized by term?
How can you get better search results in less time?
- Consider using operators and/or advanced search techniques
- Isolate exact search phrases for use in full-text searches
- Once you’ve found a good online resource, take one additional step to find similar results
Each of these three tactics is discussed below.
1. If your favorite search engine allows for operators, try enabling operators under “advanced search settings.” Operators may refer to common Boolean operators (AND, OR, NOT) or may use different symbols for the same functions. Familiarize yourself with your particular engine’s operators and vernacular. In Google, for example, go to the “Settings” link located at the bottom right of the Google search page (opened in a browser such as Internet Explorer or Firefox). From the pop-up menu, choose “Advanced search.” Scroll down to the entry at the left that reads “Use operators in the search box.”
Explore this page’s many options. A little time invested here typically yields large dividends in your future searches. You may decide to use the advanced search boxes, or use Boolean shortcuts such as AND, OR, and - (the minus sign, for NOT). You may also truncate words or employ wildcards using the asterisk (*). An additional descriptor, like “business,” “knowledge management,” or “project,” added to “taxonomy” will better identify your target.
The time you take to carefully construct your search query will help “prefilter” your results and increase their relevance. Your searching will begin to look more like this:
(KM taxonom* OR enterprise taxonom* OR business taxonom* OR project taxonom* OR corporate taxonom*) AND (manag* OR software)
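The same pattern can be expressed programmatically. As a minimal sketch (the function name and structure are invented for illustration, not part of any search engine’s API), here is how such a query might be assembled from groups of alternative terms:

```python
# Hypothetical helper: assemble a Boolean search query from term groups.
# Each group is a list of alternatives OR'd together; the groups
# themselves are AND'd together.

def boolean_query(*groups):
    """Build a Boolean query string from groups of alternative terms."""
    clauses = ["(" + " OR ".join(terms) + ")" for terms in groups]
    return " AND ".join(clauses)

query = boolean_query(
    ["KM taxonom*", "enterprise taxonom*", "business taxonom*",
     "project taxonom*", "corporate taxonom*"],
    ["manag*", "software"],
)
print(query)
```

Grouping the alternatives with OR and joining the groups with AND is what keeps the result set broad on vocabulary but narrow on topic.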
2. In order to trim down the number of results, try the strategy of isolating exact phrases for full-text searching with quotation marks (“”). Your search queries will begin to look more like this:
(“project taxonomy” OR “enterprise taxonomy” OR “corporate taxonomy”)
Although it is possible to conduct the same searches in the “advanced search” option of most search engines, why not “save time” by learning a few of these shortcuts and experimenting with them?
3. Once you’ve discovered a resource or webpage that contains relevant content, try utilizing a few related searches that will surface similarly useful and relevant results. In Google, for instance, you can type info: followed immediately by the web address (the Uniform Resource Locator, or URL). Here’s an example of what you might type into the search box window if you liked what you saw at www.taxonomystrategies.com: info:www.taxonomystrategies.com
After you receive your results, look to the bottom of the results page for additional options, such as links to pages similar to or related to that site.
Consider another example. If you were pleased with what you found at www.taxobank.org, then try typing: info:www.taxobank.org
Type the following terms into your search box window, and note the differences and nuances of the results rendered. (Note: leave no space between the colon and the URL that follows it, as in the examples above.)
info:(cut and paste the URL from the relevant site here)
related:(cut and paste the URL from the relevant site here)
link:(cut and paste the URL from the relevant site here)
You might also try typing the URL into www.similarsites.com. (Beware: many of the “results” there are interspersed with ads!)
In the next post in this series, we will consider additional active search strategies to assist you in using time wisely to ferret out resources for your taxonomy needs.
Eric Ziecker, Information Consultant
Access Innovations, Inc.
Once upon a time, there was a real art to finding something in a library, and the card catalog, in a way, was the medium. Those giant wooden cabinets were filled with mysteries to be uncovered, but the first mystery was how to navigate it. There were always the artists—the librarians—who could help you through it, but that is really only viable for a limited amount of content; librarians have other duties, after all.
For people with larger goals, such as authors and researchers, it could get complicated really fast. They were never in the position of needing a single book or a single article; they needed a mountain of them. If they were working in a very narrow subject, that made things a little easier. But the broader the subject, or the greater the number of narrow subjects, the more quickly it became clear just how much work it would be to successfully find everything they needed.
What’s more, it was virtually impossible to enrich the research with material they never knew existed, at least not without the direct help of a colleague or expert who could recommend new material to them.
It gets even more complicated when you start to consider expanding the search beyond specific titles into authors, publishers, or tangentially related subjects. Then you start to get into cross-references; those are complete sets of records in themselves. By now it’s an unmanageably huge amount of information to deal with, and librarians, magical though they may be, could only do so much.
The thing is that “once upon a time” really isn’t that long ago; advances in information sciences have turned that magic into something more accessible to everyone. Tagging documents with metadata to identify the author name, institutions, subject matter, or any relevant piece of information at all brings all of those card catalogs into a single databank, accessible all at once.
It opens up wide possibilities for content usage, but what about applying those same “tagging” principles to people? We like to call it Semantic Fingerprinting because, it turns out, tagging a person’s electronic record actually does reveal the uniqueness of the person.
In academic publishing, the benefit of this fingerprinting is pretty clear. Knowing the author’s name, date of birth, institution, or really anything you want allows him or her to be identified quickly and, more importantly, with accuracy. This is important for a couple of reasons.
On the author’s side, proper credit for their work is of course important, and, with their name and likely their institution already tagged in their book or article, their identity points straight to their tagged record, establishing them as the true author. Additionally, if the subject matter they’ve written about is tagged in their record as well, a new article submission can be placed intelligently into the peer review process. If you write about nanotechnology, experts in the field can quickly be identified and sent the article for review, eliminating one of the many possible slowdowns in a tedious but necessary process.
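As a rough illustration of how that reviewer matching might work (the names, tags, and scoring rule here are all invented for illustration; this is not the actual workflow of any submission system), candidates could be ranked by how many subject tags they share with the submission:

```python
# Hypothetical records: a submission's subject tags and a pool of
# reviewers with expertise tags drawn from the same vocabulary.
article_tags = {"nanotechnology", "materials science", "microscopy"}

reviewers = {
    "Dr. Alvarez": {"nanotechnology", "polymer chemistry"},
    "Dr. Chen":    {"nanotechnology", "materials science"},
    "Dr. Osei":    {"astrophysics", "cosmology"},
}

def rank_reviewers(tags, candidates):
    """Rank candidate reviewers by the number of subject tags they
    share with the submission; drop candidates with no overlap."""
    scored = [(len(tags & expertise), name)
              for name, expertise in candidates.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(rank_reviewers(article_tags, reviewers))  # Dr. Chen ranks first
```

The point is not the particular scoring rule but that consistent tagging makes the match a fast set operation instead of a manual hunt.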
For the publisher, it’s just as important, as it makes categorization of various authors easier. With the subjects tagged, it becomes really easy to see in which journal the article belongs, but it also aids in sales and subscriptions, which are becoming more important to the whole process than ever.
Subscription prices are going up while institutional budgets are slashed, meaning that a university has to make some hard choices about which journals are most important to them. So for the publisher to be able to look at their author and institution identities is a big deal. If they get word that a university library is planning to cancel their subscription, they can match who from that institution published in the journal and suggest that maybe they reconsider, given that their faculty has published in the journal whatever number of times over the last ten years. It’s unfortunate to think of the bottom line all the time, but we’ve all got to keep the lights on.
Many of these same things apply for researchers, which gets back to the original problem of sifting through content in a library. When the document is tagged, the researcher can quickly identify all of an author’s published work, when it was published, and on what subjects. From those subjects, they can then see other authors who published on the same or related topics and, soon, you see a network of information starting to build that is massively useful to people all throughout the publishing process.
And while we talk about academic publishing a lot around these parts, the private sector can get just as much use out of Semantic Fingerprinting as the public. Suppose, as a random example, the manager of a corporate marketing department is trying to put together a team of people for a big campaign. The manager needs people with very specific skills that may or may not go along with their job descriptions. Let’s say that the manager had employees take a survey at some previous point, which suggested individual skill sets. What if, then, each individual had those skills tagged within their employee record? Rather than have to hunt or, worse, simply hope that the chosen employees can perform the duties, the manager could just look at those skill tags and pinpoint exactly who will do for what task.
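A minimal sketch of that skill-tag lookup, with entirely hypothetical employee records (the names and skills are invented for illustration):

```python
# Hypothetical employee records with skill tags, as described above.
employees = [
    {"name": "Jordan", "skills": {"copywriting", "SEO", "analytics"}},
    {"name": "Priya",  "skills": {"graphic design", "video editing"}},
    {"name": "Sam",    "skills": {"SEO", "social media", "analytics"}},
]

def find_staff(required, records):
    """Return employees whose skill tags cover all required skills."""
    required = set(required)
    return [r["name"] for r in records if required <= r["skills"]]

print(find_staff({"SEO", "analytics"}, employees))  # Jordan and Sam qualify
```

With skills stored as tags, assembling the campaign team becomes a simple filter rather than guesswork.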
I don’t know how many companies out there are doing stuff like that, but I can see so many possibilities in working with semantic fingerprints. I can imagine possibilities in just about any industry I can think of, and I’m sure there’s a mountain of uses that I haven’t fathomed yet. In connection with Linked Data, it could be almost endless.
As we announced yesterday, our own Margie Hlava was named winner of the 2014 ASIS&T Award of Merit. Because she will be in Seattle on November 4th to receive this award, Bob Kasenchak will be leading the Taxonomy Boot Camp workshop “So You Have a Taxonomy – Now What?” in her place.
Bob Kasenchak is Project Coordinator at Access Innovations and brings a wealth of knowledge and experience to the art of taxonomies. Bob has been working at Access Innovations for several years, rising through the ranks with his quick wit and experience. He has worked on and managed projects for clients such as JSTOR, the Education division of McGraw-Hill, AAAS (the American Association for the Advancement of Science), and ASCE (the American Society of Civil Engineers), among others. His experience with taxonomy and thesaurus development and his knowledge of editorial workflows and challenges make him a skilled coordinator. He shares his knowledge and expertise both with clients and with the editors of Access Innovations as he ensures that projects are running smoothly and efficiently. He presented a paper at the 2013 Taxonomy Boot Camp in Washington, D.C., and attended the 2013 Frankfurt Book Fair, meeting with many past and present Access Innovations clients.
Check out the workshop with Bob and seek him out to introduce yourself. He is a source of vast and varied knowledge. Plus we just learned the early bird pricing has been extended to October 10, 2014. So don’t wait!
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.
“The Award of Merit is our society’s highest award,” explained Richard Hill, Executive Director of ASIS&T, “and Margie has definitely earned it through her achievements. She has created opportunities where none previously existed, thereby expanding the field itself. In addition, as a member of ASIS&T, she has contributed countless hours of volunteer service to the great benefit of the Society.”
“Marjorie Hlava has spent forty years demonstrating how published theories of information science work in large-scale environments. Information professionals, and in fact people not even aware they are part of the information industry, use things she has created without realizing it. She has a keen eye for identifying ways in which fundamental principles of knowledge organization can become useful in the less-than-perfect environment of everyday applications,” wrote Harry Bruce, ASIS&T president, in the meeting program. “She could easily have led an academic life; however, she chose a different, and in many ways more difficult, way of shaping information science. She created a company and set of products and solutions (standards, schemas, languages, databases, taxonomies) that both applied principles and drove research by demonstrating what worked and what needs to be done.
“Patents, a diversity of projects, and a spirit of entrepreneurship have illustrated and strengthened key linkages between associated fields. Her nomination packet includes five letters, all of which are from significant information scientists, demonstrating how Marjorie is an example of how ASIS&T is unique in supporting a special blend of applied and theoretical work.”
Ms. Hlava was interviewed in April of 2014 as part of the “Leaders of Information Science and Technology Worldwide: In Their Own Words” initiative sponsored by ASIS&T under the guidance of the Special Interest Group, History and Foundations of Information Science (SIG/HFIS) and the 75th Anniversary Task Force of ASIS&T. A video of this interview is posted on the ASIS&T website and can be viewed here.
“I am surprised, delighted, and humbled by this honor,” commented Ms. Hlava. “I have always enjoyed my membership in ASIS&T and found the presentations to be a springboard for new ideas to try.”
Access Innovations CEO Jay Ven Eman observed, “The insights Margie has gained from attending the meetings and networking with other members have fueled her desire to undertake new (and sometimes daring!) developments with the company’s service offerings and, later, the software. Conversations with other members have helped her find creative ways to address the applications of information science and its challenges. We look forward to many more years of continued involvement in ASIS&T.”
According to the ASIS&T website, “The Award of Merit was established in 1964 and is administered by the Awards and Honors Committee. The purpose of the award is to recognize an individual deemed to have made noteworthy contributions to the field of information science. Such contributions may include the expression of new ideas, the creation of new devices, the development of better techniques, or substantial research efforts which have led to further development of thought or devices or applications, or outstanding service to the profession of information science, as evidenced by successful efforts in the educational, social, or political processes affecting the profession.
“The award is a once-in-a-lifetime award and is sponsored by the Society-at-Large and is administered by the Awards and Honors Committee. The award shall be announced and presented to the winner by the ASIS&T President, with appropriate ceremony, at the banquet of the annual meeting of the Society.”
The presentation of the Award of Merit and the society’s other awards is to be made by Harry Bruce, the current ASIS&T president, at the upcoming ASIS&T Annual Meeting in Seattle, Washington at the Awards Luncheon on Tuesday, November 4, 2014.
About Access Innovations, Inc.
www.accessinn.com, www.dataharmony.com, www.taxodiary.com
Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus and taxonomy creation, and semantic integration. Access Innovations’ Data Harmony® software includes automatic indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs. Data Harmony is used by publishers, governments, and corporate clients throughout the world.
About ASIS&T – www.asis.org
Since 1937, the Association for Information Science and Technology (ASIS&T) has been the association for information professionals leading the search for new and better theories, techniques, and technologies to improve access to information. ASIS&T brings together diverse streams of knowledge, focusing what might be disparate approaches into novel solutions to common problems. ASIS&T bridges the gaps not only between disciplines, but also between the research that drives and the practices that sustain new developments. ASIS&T counts among its membership some 4,000 information specialists from such fields as computer science, linguistics, management, librarianship, engineering, law, medicine, chemistry, and education – individuals who share a common interest in improving the ways society stores, retrieves, analyzes, manages, archives and disseminates information, coming together for mutual benefit.
Nobody is going to deny that publishing is and always has been a sometimes messy process, but sophisticated uses of metadata and taxonomies can help clean it up. It fascinates me how intimately it can work in every step of the process to make it easier on everybody, from the author writing the piece to the institution that publishes it, all the way to its marketing and use.
Let’s start at the beginning, with the writer. Presumably, the person is an expert in his or her field, or at least working toward it, but that absolutely doesn’t make them an expert in searching for the information they need. That’s what always made library science so valuable, and while librarians are still extremely valuable (don’t want to offend my librarian friends out there), the rise of enriched metadata means that the content they need to conduct their research can be laid out clearly and concisely in front of them. This allows them to function in a noise-free environment and produce their best possible work.
So they’ve done all that and it’s time to submit the work to publishers. As we’ve seen, this can be an ordeal, but semantically enriched content, once again, can be implemented to ease the process for both the author and the publication. Tagged with relevant thesaurus terms, the submission can be analyzed to identify its subject, where it can then be more easily sorted and sent to properly qualified experts in the field for peer review. This might seem like a small part of it, but any amount of time saved is a big benefit to the author, who is often under the crushing weight of tenure deadlines.
However, once the author’s submission is out the door and in the hands of peer reviewers, it goes through its revision process, sent back and forth to get everything squared away. This, of course, can take a long time, but once the work is ready for publication, metadata begins to take on its most important role. Those same (or similar) subject terms that helped direct the submission into peer review now help to make certain that it is now directed to the most relevant possible journal, ensuring that the right people can easily find it.
This is the point at which, with the right tools and the right people in place, the metadata can really shine, because there’s so much that can be done with it. Once an article is published, either in an open access format like PLOS One or a more traditional subscription journal, its metadata can be used for an increasing number of purposes, anything from simple organization to highly advanced linked data.
Whatever that data is used for, the most important thing is that the content can be found. Everything after that is useless if it sits in the ether, hidden so nobody can read it. And as is likely fairly clear by now, the metadata is absolutely crucial at this end stage, where other researchers need to locate the content to conduct their own work. Just as the original authors needed clear, concise search results when their process started, if these new researchers have their results muddled with noise, or if a relevant result gets missed completely, it’s much more difficult to find the necessary content. This can prevent authors’ work from reaching the people who require it and keep it from furthering work in the field.
That’s counterproductive to research, obviously, but it’s also totally unnecessary. It shouldn’t take much to get people to see how this kind of metadata enrichment can make authors’ and publishers’ lives easier. It’s relatively new and there are a lot of buzzy words attached to it, but that doesn’t change the value of the core concept.
The good news is that semantically enriched metadata is starting to show up all over the place. Software like Data Harmony from Access Innovations automates much of this to help academic journals and institutions facilitate research. The pile of metadata is already gigantic, so it’s vital that the new content that journals are constantly publishing gets analyzed and tagged swiftly and accurately.
To me, the furthering of research is the most important thing, but there is another step in the process, that of marketing and sales. It’s the same principle as with everything else here: you can’t buy what you can’t find. The place with the clearest inroads to the content the consumer is looking for will be the one that wins. But the truth is that the sooner that people adopt the ideas behind semantically enriched metadata, the sooner it is that we all win.
Are you ready for boot camp? Do you have all your tools and gear packed, ready to learn? The Taxonomy Boot Camp is scheduled for November 4-5, 2014 in Washington, D.C. as a precursor to the Enterprise Search & Discovery 2014 conference that we told you about earlier this week.
The first day of Taxonomy Boot Camp features a track for those who are already well-versed in the fundamentals of taxonomy or who would like to learn how professionals have made their organizations more successful through better use of taxonomies. Leading the list of workshops is our own Marjorie Hlava with “So You Have a Taxonomy – Now What?”
Others include “Manual & Automatic Subject Tagging in PLOS” with Helen Atkins, Director, Publishing Services, Public Library of Science, and “Implementing a Taxonomy for the Common Core” with Raj Cary, VP of Technology/Architecture, Triumph Learning.
See all the options and register here. See you in D.C.!
Melody K. Smith
Sponsored by Access Innovations, the world leader in taxonomies, metadata, and semantic enrichment to make your content findable.
Access Innovations recently debuted Data Harmony Version 3.9. Among its new features and fixes is a sneakily clever module called Inline Tagging. On the surface, it does exactly what the name says: it allows the user to see, quickly and clearly, which concepts in a piece of content, and exactly where in the text, triggered subject tagging by the software. It seems simple enough, a handy tool, but upon closer inspection, it really opens doors for the user.
Once the text is tagged, it becomes a question of what the user wants to do with it. That’s where the possibilities start to get really intriguing. In part, it allows an editor to do some very helpful things internally. Once term indexing triggers are tagged in a document, the editor could, for instance, go to the terms’ thesaurus listing, where they can see broader and related terms, along with synonyms or any number of facets of the taxonomy.
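As a toy illustration of the underlying idea (the term list and sample text here are invented, and Data Harmony’s actual rule-based matching is far more sophisticated than this), locating term occurrences along with their character offsets might be sketched like so:

```python
# Minimal sketch: find where controlled-vocabulary terms occur in a
# text and record their character offsets. Real inline tagging uses
# much richer matching rules than literal case-insensitive search.
import re

thesaurus_terms = ["linked data", "taxonomy", "metadata"]

def inline_tags(text, terms):
    """Return (term, start, end) for each case-insensitive occurrence,
    ordered by position in the text."""
    hits = []
    for term in terms:
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            hits.append((term, m.start(), m.end()))
    return sorted(hits, key=lambda h: h[1])

sample = "Good metadata makes linked data possible; a taxonomy anchors both."
for term, start, end in inline_tags(sample, thesaurus_terms):
    print(f"{term!r} at {start}-{end}")
```

Once each occurrence is pinned to a location, it becomes an addressable data item, which is exactly what makes the linking described below possible.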
Thus, Inline Tagging is a helpful tool in aiding the editing process, but my thoughts are moving more toward the end user right now. It’s they who can truly reap its benefits. That’s because Inline Tagging can easily serve as a conduit for linking data, which has the potential to dramatically enrich a user’s search experience, a capability that is absolutely crucial, especially in publishing.
We’ve already seen how massive the amount of data in the world has become, and we’ve seen the need to understand and control it. We see the emergent patterns in that data, and we work with it to discover new avenues for viewership or revenue or education. But that’s using just a handful of datasets. No matter how large they might be, the size of that data pales in comparison to the data in the world. If we could harness that power, what could we do?
Linked data, which has emerged as one of the most important concepts in data publishing, could well be the answer. In a database that implements Inline Tagging, the key terms and concepts in the documents are located at their occurrences within those documents. By using Inline Tagging, you turn a passage of text into a data item that can be quickly plucked for analysis. But how does that help us?
It can work on a number of levels. This can be as simple as having a taxonomy term link to a definition page, with broader and narrower terms, synonyms, etc. That right there can help with clarity, speed, and accuracy, but that’s just the beginning. There could also be a more substantial relationship between a thesaurus and the world’s data, one that allows users to take those data items and send them out to mine the web for related tags, drawing them back to the original page as related materials.
Say somebody is starting to write a paper on how a cheetah raises its young. They go online to research it and find a paper that addresses the topic perfectly. Now, this website also happens to implement linked data, so when the user queries “cheetahs raising young,” not only does the search produce a strong match on the site; it also, in turn, queries the cloud of data on the web. On its own, it locates information on other sites on the same topic and pulls down additional links: a wiki page, other related articles and papers, videos, or really anything.
It’s well known that people love one-stop shopping. That’s true in retail and that’s true in publishing. If the researcher can get all that information, curated personally for them in a clear, concise, and most importantly, highly accurate manner, they’ll almost certainly make that site their primary resource.
Some of these concepts have already been implemented in places, notably at the BBC, whose unique Sport Ontology created for the 2012 Olympic Games revealed just some of the potential of linked data. The idea was to personalize how the viewer watched the Olympics, understanding that enriched, relevant information delivered to the viewer in real time will drive traffic to the site.
There are even bigger ways linked data is being used, or potentially being used. The European Union is funding a project called Digitised Manuscripts to Europeana (DM2E), which aims to link all of Europe’s memory institutions to Europeana, the EU’s largest cultural heritage portal, to give free access to the stores of European history.
What if, in theory, a medical organization had access to linked data during flu season? That organization could pull information from not only medical records, but from, say, community records, school data, and other sources to try to predict when and where outbreaks might occur to minimize the damage. Certainly, there are issues with privacy and other hurdles that would need to be addressed, but even though that example is theoretical, the potential is massive.
Of course, proper implementation of linked data takes plenty of cooperation, so the jury is still out on how much or how soon sophisticated linked data usage could come about. The possibilities for academia, cultural awareness, and even retail look too enticing for it not to flourish. I, for one, am looking forward to a day where information I never dreamed of is right at my fingertips. I don’t know what it’s going to be, but it should be a fun ride.
Access Innovations, Inc. has announced that the Data Harmony Metadata Extractor is available as an extension of MAIstro™, the flagship thesaurus and indexing application in the company’s Data Harmony software line. Metadata Extractor is a managed Web-based service for revealing the hidden structure in an organization’s content, through superior data mining of publication elements, to normalize and automate document metadata tagging for the benefit of the organization.
Data Harmony Version 3.9 software achieves user-friendly integration of a taxonomy (or thesaurus) with an existing content platform or publishing pipeline. Patented indexing algorithms generate terms that describe what documents are really about, and precise keywords are attached for retrieving those content objects later, under different conditions. Among other benefits, deploying Data Harmony for subject tagging throughout a document collection creates a better search experience for users, because the results they get are closer to the point – there’s less extraneous material.
Leveraging a patented approach to text analysis for better keyword tagging is only one of the advantages to be gained from implementing the new Metadata Extractor Web service.
Quality Metadata Is Essential for Effective Content Management
To enhance the quality of metadata, this Data Harmony extension generates a complete bibliographic citation, creates an auto-summarized abstract of an article’s content, handles author parsing, and assigns subject keywords automatically. Metadata Extractor takes an unstructured or semi-structured article as input and returns an XML document with richer, more descriptive information captured in the metadata elements.
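As a rough illustration of that input-to-output flow, the sketch below shows a hypothetical transformation from a semi-structured article to an XML document with metadata elements. The element names and the trivial "extraction" logic are invented for this example; they are not the actual Metadata Extractor schema or algorithms.

```python
# Hypothetical sketch: turn a semi-structured article (title line, author
# line, then body) into an XML document with metadata elements.
# Element names and parsing rules are invented for illustration only.
import xml.etree.ElementTree as ET

def extract_metadata(raw_article: str) -> str:
    lines = [ln.strip() for ln in raw_article.strip().splitlines() if ln.strip()]
    title, authors = lines[0], lines[1]
    body = " ".join(lines[2:])

    doc = ET.Element("document")
    ET.SubElement(doc, "title").text = title
    for name in authors.split(";"):                 # naive author parsing
        ET.SubElement(doc, "author").text = name.strip()
    # Stand-in for auto-summarization: keep the first sentence of the body.
    ET.SubElement(doc, "abstract").text = body.split(". ")[0] + "."
    ET.SubElement(doc, "body").text = body
    return ET.tostring(doc, encoding="unicode")

article = """Indexing at Scale
Smith, J.; Doe, A.
Controlled vocabularies improve retrieval. They also reduce noise."""
print(extract_metadata(article))
```

The real service applies customized, per-publication rules rather than fixed line positions, but the shape of the result, descriptive metadata wrapped in named XML elements, is the same.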
The Metadata Extractor extension identifies descriptive information in a document, distilling and normalizing it using methods far more sophisticated than merely matching keywords in text. The extension attaches this enhanced metadata to boost the long-term value of the content object. It’s been shown that high-quality metadata, consistently applied, reduces a common source of user frustration: not finding the appropriate document at the right time in an oversized, disorganized file system.
Publishers Stand to Gain From Implementation
“Metadata Extractor is an essential addition to the Data Harmony software lineup for scholarly publishers, especially,” said Marjorie M. K. Hlava, President of Access Innovations, when asked to comment on its release. “Since every publication style sheet requires a targeted approach to leverage the most appropriate fields, Access Innovations provides customization supporting each new implementation. The result is a highly specialized output of accurate, consistent metadata for client documents, with subject keywords applied from their own unique vocabulary.”
M.A.I.™ Sets This Metadata Tool Apart from the Rest
“The extraction process uses element-based semantic algorithms mediated by M.A.I., the Machine Aided Indexer,” said Bob Kasenchak, Access Innovations’ Production Manager. “It draws on a set of Data Harmony programs that harness natural language processing (NLP) for targeted text analysis. During configuration, elements in the document schema are specified for metadata extraction, to reflect the structure of input articles. Then, whenever someone processes an article with Metadata Extractor, M.A.I. algorithms go to work surfacing crucial pieces of information to identify that document, and that document only.”
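To make the general idea of vocabulary-driven, rule-based indexing concrete, here is a toy sketch, far simpler than M.A.I. itself. The trigger phrases and controlled-vocabulary terms are invented; a production system would use natural language processing and a curated rule base rather than plain substring matching.

```python
# Toy sketch of rule-based indexing against a controlled vocabulary.
# Each rule maps a trigger phrase found in text to a preferred term.
# The vocabulary below is invented for illustration only.
rules = {
    "taxonomy": "Taxonomies",
    "controlled vocabulary": "Controlled vocabularies",
    "metadata": "Metadata",
    "xml": "XML",
}

def suggest_terms(text: str) -> list[str]:
    """Return controlled-vocabulary terms whose trigger phrases appear in text."""
    lowered = text.lower()
    return sorted({term for trigger, term in rules.items() if trigger in lowered})

print(suggest_terms("We publish our taxonomy as XML with rich metadata."))
# ['Metadata', 'Taxonomies', 'XML']
```

Even this crude version shows why consistent vocabularies matter: the same preferred term is applied no matter how the concept is phrased in the source text.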
The graphical user interfaces (GUIs) and input elements for the Metadata Extractor Web service are adjustable based on the nature of incoming data and user needs.
Data Harmony Extension Modules
Access Innovations offers an expanding selection of Web-based service extension modules that are opening up new avenues between content management platforms and the innovative Data Harmony core applications: Thesaurus Master® and M.A.I.™ (Machine Aided Indexer).
To supplement an organization’s publishing pipeline or document collection with great tools for knowledge discovery, the Data Harmony Web service extensions operate on the basis of rigorous taxonomy structures, creative data extraction methods, patented text analytics, and flexible implementation options. All Data Harmony software is designed for excellent cross-platform interoperability, offering convenient opportunities for integration in all kinds of computing environments and content management systems (CMSs).
Visit the Data Harmony Products page to explore the range of focused solutions that are presented by Data Harmony Version 3.9 extension modules.
About Access Innovations, Inc.
Founded in 1978, Access Innovations has extensive experience with Internet technology applications, master data management, database creation, thesaurus/taxonomy creation, and semantic integration. Access Innovations’ Data Harmony software includes machine aided indexing, thesaurus management, an XML Intranet System (XIS), and metadata extraction for content creation developed to meet production environment needs. Data Harmony is used by publishers, governments, and corporate clients throughout the world.
Not that long ago, getting published was the big hurdle for a writer to overcome. You could produce all you wanted, but unless you knew how to get somebody to read your random submission, or you were rich enough to self-publish, your writing lived in a drawer, waiting for you to give it to a friend who didn’t want to read it.
It’s hard to believe how fast technology has opened publishing up to people. Now, anyone with an opinion has a platform, and while it’s as tough as ever to make a living writing, the platform, in many cases, is totally free. So that changes the hurdle from publication to recognition. If everybody has a voice, how do you get heard?
This isn’t just a question of red-hot opinions on social media. The explosion of e-book publishing has enabled writers of all kinds and all backgrounds, and without a character restriction. Whether it’s through a blog, an e-book, or whatever, the gatekeeper has started to disappear, and to a writer who likes getting published, that prospect is thrilling.
But a new gatekeeper has replaced the old. The driving force of the explosion has been the Amazon Kindle. Since its release in 2007, Kindle titles have taken an increasingly large share of the industry, and now make up nearly 20% of all book sales, not just e-books.
That’s astonishingly fast, and the publishing industry has been dragged along kicking and screaming. It’s easy to see how it could be a painful transition for publishers. There’s no physical copy to print and they’re out of the distribution game, so they naturally make less per book sold than they did in the past. Amazon made deals advantageous to itself, of course, but sales have continued to increase. The downside is that issues have arisen from Amazon trying to strong-arm publishers who don’t want to play ball.
By the same token, writers make less in royalties than they once did, as well. That’s the sad part, I guess, but the positive side is that more people are writing and more ideas are floating around, which is a beautiful thing and vital to the advancement of culture. It also presents a brand new problem for the industry: information overload.
As long as there was traditional publishing, there was a structure in place to determine what writing was deemed “worthy” of printing. It kept dangerous or controversial views out of the public, sure, but it also filtered out the garbage. Academic publishing still has its review system in place to make sure a work is suitable to print, but the non-academic side now has little to no filter.
Let’s face it: for all the good that open access to publication can do for society, it also means wading through a lot of low-quality material to find what’s relevant. The question becomes how to organize that flood so that you don’t have to sift through piles of irrelevant and useless material every time you search. It’s for this reason that data management has become so vital. Its use has resulted in revolutionary new ways to look at publishing.
The basic fact of having an individual platform is big enough. But there are larger, more groundbreaking efforts to take advantage of the opportunities the technology has afforded us. Norway, for instance, is in the process of digitizing all of its books, all of them, to make them available online to anyone with a Norwegian IP address; the Digital Public Library of America is a growing resource connecting libraries across the country; and the Public Library of Science has turned the paradigm of academic publishing on its ear.
The concept of the digital library isn’t new. Project Gutenberg has been around since 1971. Little did we know back then what kind of value that might have. It’s only becoming clear now that analytic software has become so advanced. For Amazon, books were a means to mine customer data for other products. Now, that kind of data mining is commonplace. It doesn’t have to be about sales, though. In these library projects, that same level of data mining can be used for all sorts of purposes, from recommending new reading materials to a better understanding of a student’s learning habits.
The potential in these projects is limitless, and it takes innovative thinkers to look for patterns and derive ways to utilize them. But the most important thing to me is that what I write, what anybody writes, can be published and accessed for all to see in one form or another if somebody is interested. After all, if I want to read about new methods in cancer treatment or some crazy person ranting about aliens, I should have that right, and so should everyone.