METADATA

Thursday, August 14, 2014

University of Oklahoma

Check out @tamarameans's Tweet: https://twitter.com/tamarameans/status/500128164945735680

Thursday, May 1, 2014

Goodbye

I will miss you all, especially David Grosh, a great group member... work with him, guys, he is a really hard worker. Kristen, you worked like the devil; I appreciate your commitment to the cause. Rob, it was great working with you this semester... To all: I will miss you. Two days until I cross that stage for the temporary unemployment line... I will stay active using my PLN to keep up on your lives. Stay in touch... tamara.means@gmail.com.

Indexing!!!

I am so sick of backtracking, trying to get all the relation elements together... I am sorry, Amy, for not responding, but I thought you could figure out the link tool... This has been a hair-raising experience. I actually might like a job doing this... watch out, Dr. MacCall might start getting calls for references.

Sunday, April 6, 2014

Digital learning object repositories by H. Frank Cervone



I told you this article was coming after the SCORM readings... This type of repository faces a very different set of development problems.

So you have a repository with re-usable learning objects...love it...from an instructional p.o.v.

The article argues that success requires buy-in across the board, including IT staff, instructional designers, and many others.

Many skill sets are needed that usually do not reside in traditional digital repository teams; for example, simple versioning of objects will not work. Other issues: objects must be created to fit multiple contexts, minimizing reinvention and duplication, yet customization is still a requirement.

Organization is another issue; it requires an intuitive discovery approach based on keywords, subjects, and many other format considerations.

Objects should be free from copyright restrictions and governed by a Creative Commons license (I have blogged about this).

Many issues confronting this type of repository have plagued libraries for many years, but the long-term storage and use of learning objects are unique and lend themselves to slower development of solutions. The momentum needed is difficult to find in smaller institutions, so larger organizations implementing inter-institutional repositories could be the best approach. I hope this article gives you some idea of the scope of the problem.
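
To make the versioning and customization point concrete, here is a minimal, hypothetical sketch (not from Cervone's article; all field names are my own) of how a repository record might track context-specific variants of one reusable learning object rather than a single linear version chain.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """Hypothetical repository record for one reusable learning object."""
    identifier: str
    title: str
    keywords: list[str]
    # Simple linear versioning is not enough: the same object may need
    # customized variants for different instructional contexts.
    variants: dict[str, str] = field(default_factory=dict)  # context -> package path

lo = LearningObject(
    identifier="lo-0042",
    title="Intro to Dublin Core",
    keywords=["metadata", "cataloging"],
)
lo.variants["undergraduate-survey"] = "objects/lo-0042/undergrad.zip"
lo.variants["staff-training"] = "objects/lo-0042/staff.zip"
print(lo.variants)
```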
  

The One Minute SCORM Overview for Anyone


I was so looking forward to reading the two articles on SCORM... Currently I am participating in an instructional assistantship, which requires us to create reusable digital learning content... so on to the article.

I knew very little about the concept of SCORM.

What is SCORM?

Shareable Content Object Reference Model: just a way to build training content so others can use it... yeah... the versions of SCORM govern two things, "packaging content" and "exchanging data at run-time" (a rough packaging sketch appears after this Q&A).

Why use it?

Basically, it is widely adopted, period... and it works well at huge organizations.

What's a SCO?
A Shareable Content Object is the smallest grain of content that is "reusable and independent"... it stands apart from other items.

What Version?
Any... guess what? It's an interoperable system... yeah...

Are you Content or LMS vendor?
It depends: if you supply the content you are a content vendor... if you import someone else's you are an LMS... very flexible for users.

What SCORM is not.
SCORM is just online training, and only between a single user and the system... no offline training or group training.
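
Since the packaging half is just files plus a manifest, here is a minimal sketch of peeking inside one. It assumes a SCORM package has already been unzipped locally with its imsmanifest.xml at the package root; the tag matching is deliberately namespace-agnostic because SCORM 1.2 and 2004 manifests use different XML namespaces.

```python
import xml.etree.ElementTree as ET

def list_resources(manifest_path: str) -> list:
    """Return (identifier, href) pairs for resources declared in a SCORM
    imsmanifest.xml. Matches local tag names so the same sketch works for
    the differently namespaced SCORM 1.2 and SCORM 2004 manifests."""
    tree = ET.parse(manifest_path)
    resources = []
    for elem in tree.iter():
        if elem.tag.split("}")[-1] == "resource":  # strip any XML namespace
            resources.append((elem.get("identifier", ""), elem.get("href", "")))
    return resources

# Example, assuming a package was unzipped into ./my_course/:
# print(list_resources("my_course/imsmanifest.xml"))
```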

That's it... This led me to an article on digital learning object repositories; that will be my next blog post... check it out.


Cervone, H. Frank (2012), "Digital learning object repositories", OCLC Systems & Services, Vol. 28, Iss. 1, pp. 14-16.



Thursday, March 27, 2014

Welcome Europeana Professional



Welcome!!!

I have to admit I was interested from the get-go... I spent several years stationed in the UK with the military... so hats off to our forefathers.... A multi-language site, wahoo....

Who we are---a search portal, an API (Application Programming Interface), and linked open data (we have talked about this in class). A network of experts in an open forum, with members bringing various technical, legal, and strategic wits.

What we do---simply put: aggregate, facilitate, distribute, and engage.

Their website also offers pages for projects, the portal search engine, and some other interesting collections and community-contributed content.
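
Europeana really does publish a search API alongside its linked open data, but the sketch below is only my rough guess at what a call looks like; the endpoint, the wskey/query parameter names, and the response shape are assumptions from memory and should be checked against Europeana's developer documentation.

```python
import json
import urllib.parse
import urllib.request

def europeana_search(query: str, api_key: str) -> dict:
    """Hypothetical call to the Europeana Search API; verify the endpoint
    and parameter names against the official developer docs before use."""
    params = urllib.parse.urlencode({"wskey": api_key, "query": query, "rows": 5})
    url = "https://api.europeana.eu/record/v2/search.json?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# results = europeana_search("mona lisa", api_key="YOUR_KEY")
# print(results.get("totalResults"))
```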



I think this site is worth exploring...check it out!!!

DPLA Policy on Metadata Part II

DPLA


DPLA has undertaken the task of developing a policy to make its databases of metadata freely available to all. They are committed to ensuring unencumbered access within copyright law.

The policy also explains their definitions of content, metadata, and preview... so check that out.

So here we go 

  1. The majority of metadata is not subject to copyright restriction; that is DPLA's position.
  2. DPLA partners share this commitment and vision, and partners agree to dedicate their metadata to the public domain.
  3. DPLA asserts no rights of its own and gives the metadata to the public domain.
  4. Free, I said free!!!!! and unencumbered (luv that word): harvest, collect, modify, use, and reuse.

So that's the basic concept!!! 

DPLA Best Practices Part I

  We all need best practices, so let's look briefly at DPLA, and with good reason.



The metadata is made available under the Creative Commons Zero (CC0) Public Domain Dedication.

Which means you can use the metadata

So here are the Guidelines

  1. Give credit: others need to be able to reuse the data; remember interoperability
  2. Data can and does change; it is not static, so keep a hyperlink back to the source to learn about updates
  3. Don't mislead or misrepresent; need I say more
  4. At your own risk: the metadata is offered "as is," and you must conform to local laws
So that is it in a nutshell... check the policy for more specific information. A hedged sketch of what pulling and crediting DPLA records might look like follows below.
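
DPLA does expose its CC0 metadata through a public API, but the endpoint, parameters, and response field names in this sketch are assumptions from memory (and you need to request an API key), so double-check them against DPLA's developer documentation. The point is guidelines 1 and 2: carry the credit and a link back with every record you reuse.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_DPLA_API_KEY"  # placeholder; DPLA issues keys on request

def dpla_items(query: str) -> list:
    """Hypothetical DPLA item search; response field names are assumptions."""
    params = urllib.parse.urlencode({"q": query, "api_key": API_KEY, "page_size": 5})
    url = "https://api.dp.la/v2/items?" + params
    with urllib.request.urlopen(url) as resp:
        docs = json.load(resp).get("docs", [])
    # Guidelines 1 and 2: keep provider credit and a link back for updates.
    return [
        {
            "title": doc.get("sourceResource", {}).get("title"),
            "provider": doc.get("provider", {}).get("name"),
            "link_back": doc.get("isShownAt"),
        }
        for doc in docs
    ]

# for record in dpla_items("quilts"):
#     print(record)
```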

Wednesday, March 26, 2014

11 Things to Know About Semantic Web--ReadWrite

What you need to  know and probably already do..maybe...


  1. No apology necessary for calling it Web 3.0... of course the web doesn't upgrade like that, but there are phases... think collaboration and databases.
  2. Allows structure on the "fly," adding it as needed, but the transition will take some time because some tools are entrenched in structured systems; the semantic web will start a slow decline of relational database tech.
  3. Consulting fees await those with big headaches... with things like RDF or tuples (really triples; see the sketch after this list).
  4. Success will be judged under different categories... inherently integrated.
  5. Not about individual apps; it's all about platforms, servers, and enterprises.
  6. Apps will be consumer or enterprise driven.
  7. Google may get a run for their money, or at least the "steamroller" effect will slow down... page structure matters less when the underlying content is structured.
  8. We don't know how it is going to look; we just know what it won't look like.
  9. Pragmatic approaches will be replaced by semantic platforms.
  10. Tagging we know, but microformats will provide the structure we need.
  11. Leverage of the community will happen under the semantic web... techniques from social networking will be used... we know it's coming... just provide structure!!!
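
Point 3 above mentions RDF and tuples; in practice RDF statements are triples of subject, predicate, and object. Here is a minimal sketch using the rdflib Python library (a real package, though not mentioned in the article); the example URIs are made up.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

EX = Namespace("http://example.org/")  # made-up namespace for illustration

g = Graph()
post = URIRef("http://example.org/blog/semantic-web-post")
g.add((post, DC.title, Literal("11 Things to Know About the Semantic Web")))
g.add((post, DC.creator, Literal("Tamara")))
g.add((post, EX.topic, URIRef("http://example.org/topics/semantic-web")))

print(g.serialize(format="turtle"))  # structure "on the fly," no fixed schema
```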

Flickr Image Tagging: Patterns Made Visible by Joan Beaudoin

Folksonomies!

We all tag, so let's agree it is probably difficult to find a pattern, but our author has a plan...
Two interconnected ideas are in play
First, look for underlying patterns or similarities... next, the effectiveness of image tagging, which means a tag could cross intersections and be almost anything... that sounds very difficult when one thing fits into many categories.

Model evaluation
They evaluate and develop categories... OK, the coders can't all agree on that... you could get a majority but not everyone to agree, and tags open to interpretation caused problems. Modifying the tag categories would improve the model's performance... let the user choose the best fit.

Tag categories--how often used
The study proved out preferences in the tag categories used... yes, I would agree place names should be #1... that's how I usually ID my own images, no surprise. Users reach for compound categories when they decide a single tag will not serve their purpose. Usage of some tag categories was nominal, while others, such as the event tag, showed significant usage... at much lower usage were the humor, poetic, number, and emotion tags. (A toy tally of tags by category is sketched below.)
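
As a toy illustration only (these are entirely made-up tags, not Beaudoin's data), here is the kind of frequency count that would show place names dominating and humor or emotion tags trailing:

```python
from collections import Counter

# Made-up (tag, category) pairs standing in for coded Flickr tags.
coded_tags = [
    ("paris", "place"), ("2009", "time"), ("eiffel tower", "place"),
    ("birthday", "event"), ("mom", "person"), ("lol", "humor"),
    ("london", "place"), ("wedding", "event"), ("blue", "color"),
]

category_counts = Counter(category for _tag, category in coded_tags)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```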

Problem of meaning
 We as librarians know this issue well... "in other cases the unknown tags illustrate how important contextual knowledge is to the categorization of the tags"... in addition, foreign languages pose difficulties too.

Our individual differences
Various tags saw highly individualized use. The time tag showed the most variation among the users studied... but since most images carry a date stamp, this might be redundant.

Flickr developments
  Each improvement offers users a more vibrant tagging process... an opportunity to strengthen natural language and gain greater accuracy in retrieval.

Us
 Progress in image retrieval.
Clarity on how personalized tagging can be.
Tagging is here to stay, so get with it...

Thursday, March 6, 2014

Practical principles for Metadata Creation and Maintenance by J. Paul Getty Trust



Really practical, and it related well to classroom discussion as we move forward on our own fantastic journey of creation and discovery!!!

The J. Paul Getty Trust has some experience in metadata creation... so I trust the source (not the hip-hop magazine The Source, haha!!!).

10 rule lesson

  • Metadata creation is a core activity of memory institutions
  • Be incremental and share the responsibility (kinda sounds like what our groups are doing?)
  • Rules and processes must be followed
  • Staff levels and skill sets must be present for a metadata strategy and a successful implementation
  • Share reliable info among relevant units in the institution; remember interoperability
  • There is no such thing as one-size-fits-all for schemas, vocabularies, or data content standards
  • Streamline and replace manual metadata creation; think "industrial production"
  • Make it a routine part of the workflow (creation of shareable, repurposable metadata)
  • Make it an integral part of the institution's metadata workflow, with research and documentation
  • Secure a high level of understanding and value buy-in from upper management
All these points seem to be common sense... but remember there are breakdowns in organizations, such as sacred cows and lack of vision.

May I speak Openly about mass digitization?



The first statement I agree with is that "it's the responsibility of non-profit, cultural heritage institutions to find ways to bridge that gap and work with the corporate world toward a public good." Mass digitization projects need that corporate mindset... the results have been good for both parties, and we are moving toward some great outcomes.

Public/private collaborations have worked in many other facets of our society, so why not mass digitization? Now that many agreements are public, there is no mystery in the process. As more libraries form partnerships, we should become more familiar with how these partners work and whether we as libraries will benefit from the relationship.

The US National Archives has a plan for acting responsibly within its mass digitization project to ensure the public's confidence in the agency.

But how can we best work with these partners? Trust them for what they do best, and remember that we as libraries have expertise too. Remember that the end goal of these collaborations is open accessibility for patrons, researchers, and anybody else who wants access.

PS... Anybody interested in mass digitization should check out the Jan/Feb 2014 Archival Outlook, p. 8, "Large-Scale Digitization: Developing the Los Angeles Aqueduct Digital Platform."

Monday, February 17, 2014

The Ecology of Longevity

Survival of the fittest as it applies to non-biological phenomena: digital longevity with Darwin's theory as the framework. The "fittest," as it applies to digital objects, is determined by info pros who actively or passively decide the so-called "lifespan"; their selections make the life-or-death call for an object.

Digital objects are understood using the language of computers, which "translates," for lack of a better term, into visual or audio formats for human consumption. For computers and software the question is which characteristics of version 1.0 will pass to 2.0, which properties will survive, and whether the data will survive. We humans have inherited traits just as digital hardware and software do; think of how MS Office products are interconnected on their platform and how some "traits" from MS Office 2007 are inherited by MS Office 2013. What does this mechanical imitation of a biological phenomenon mean for digital hand-me-downs? View digital formats as living organisms that evolve, but that require us to create preservation tools that outlive them and become evolutionary in application.

As we look at our systems, think about how open source is represented and how this process has evolved along various paths shaped by creators and users, with characteristics directly inherited. Also, many technical advancements mimic organic evolution by (1) creating different designs and (2) optimizing existing designs for competition.

This article posits that an evolutionary process exists within technology and that the strongest applications survive. The ones that make it determine the future, just as the victor writes the history and gets the spoils, which here are the "econosphere" of consumers and longevity among formats.

Wednesday, February 12, 2014

What determines quality of search results?


OK... so define quality... gotcha, very difficult, and that's how this story starts... What really does influence quality, and what comparisons can be made among search engines? The author of this blog discusses the factors that affect quality.
  • Quality of relevance ranking from the underlying search engine
    • Basically, bad input gets replicated
    • Which sources are searched makes comparisons between models difficult
    • The problem may exist in the publisher's search engine, not the federated search product
  • The number of results retrieved from the underlying search engine by the federated search engine
    • The more results the better, when measured against relevancy
    • Two ways to get more search results
      • Ask for more results than the default
      • Go back to the source for more results after the first batch
  • Quality of federated search engine connectors (connectors search the database)
    • The quality of the connector determines delivery of relevant results
    • Quality is confirmed by running multiple searches against the federated search engine on one source, then the same searches at the publisher's site
    • Smart connector = better results
  • Quality of ranking of the federated search engine
    • Using the rankings of the underlying sources presents two problems
      • Many sources rank poorly
      • Federated engines need to merge relevance across multiple sources (see the sketch below)
    • Let's not forget the merging algorithm itself when judging quality
  • Results organization and presentation
    • Let's be honest, it needs to look good and be organized = more time spent with the search product
Finally, I hope this gives you ways to question vendors and the products available.
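
To make the relevance-merging bullet concrete: if each underlying source returns scores on its own scale, the federated engine has to normalize before interleaving. The min-max approach below is a naive sketch of my own, not what any particular vendor's product does.

```python
def merge_results(result_lists: dict) -> list:
    """Merge ranked results from several sources by min-max normalizing each
    source's scores to the 0..1 range, then sorting the combined list.
    A naive sketch; real federated engines use far richer relevance merging."""
    merged = []
    for source, results in result_lists.items():
        scores = [score for _title, score in results]
        low, high = min(scores), max(scores)
        span = (high - low) or 1.0  # avoid divide-by-zero when all scores match
        for title, score in results:
            merged.append({"source": source, "title": title,
                           "score": (score - low) / span})
    return sorted(merged, key=lambda r: r["score"], reverse=True)

# Example with made-up scores on very different scales:
sample = {
    "PublisherA": [("Article on metadata", 912.0), ("Older article", 400.0)],
    "PublisherB": [("Metadata handbook", 0.91), ("Unrelated hit", 0.12)],
}
for record in merge_results(sample):
    print(record)
```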

Friday, February 7, 2014

Metadata Analytics: Scene-level television metadata: Tagging TV - Is the new oil in the industry? by Richard Kastelein

  This article provides an example of Electronic Programme Guides [EPG], i.e., TV metadata, which consists of snippets of information and images. Gracenote data can be used for descriptive info, images, and multimedia on a show as a whole.
We have all watched a show and been given the option to see more, which is done by TV tagging; for example, on CNN they ask a viewer if they want interactive TV. We all know what tagging means, basically. TV metadata is a tagging process, but it is bi-directional and provides IP metrics. Some value has been identified, but the process is not easy and has no common standards... again!!! Common standards.
The article does set out a pathway to a solution. First, automate the tagging, because we would need too many geeks to do it manually. This has made some headway with speech-to-text technology, plus the use of closed captions when available. But the best way, as suggested, is a mesh of both, "moderated/curated by humans." Some attempts at XML standards for EPG metadata have started to make headway... common standards!!!! Many of the standards are built on significant predecessors, mostly from the European digitalisation of TV, with the finalization of DVB-SI coming soon. Second is MPEG-7, which provides tools for describing all types of multimedia content across the broadest range of networks and terminals.
Projects are being funded to demonstrate how Semantic Web tech can connect TV content and web convergence, with a focus on BMF 2.0 (Broadcast Metadata Exchange Format). Basically this format allows metadata interoperability within the NoTube platform, which is pushing that standard. We will have to wait and see how this tension of pushing a standard through a platform affects European TV and internet connectivity.
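
As a toy illustration (the field names are my own, not drawn from DVB-SI, MPEG-7, or BMF 2.0), scene-level TV metadata is often pictured as timecoded tag records along these lines:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SceneTag:
    """Toy timecoded tag record; every field name here is invented."""
    start_seconds: float
    end_seconds: float
    tags: tuple[str, ...]
    source: str  # e.g. "speech-to-text", "closed-caption", "human-curator"

scene = SceneTag(
    start_seconds=754.0,
    end_seconds=791.5,
    tags=("interview", "economy", "new york"),
    source="human-curator",
)
print(scene)
```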

Saturday, February 1, 2014

Bibliographic identifiers: OpenIDs, researchers and delegation

The article continues the discussion of identifiers, acknowledging the possibility of benefiting the scholarly communication channel. A previous post on my blog was about persistent IDs. Same problem: a unified, agreed-upon language, for lack of a better term on my part. As stated, "We need a single, unique way of identifying researchers."

OpenID provides that ideal of "persistence" that would be a plus in the context of the scholarly process. Delegating the technical portion is a difficult assessment, a lesser-of-two-evils choice between an organization's longevity and control, i.e., trust. The suggestion of controlling your own domain sounds viable for guaranteeing a future. The delegation piece is very important from my standpoint, as is the ability to maintain control for future changes. These ongoing conversations expose the tensions that exist among info pros.

Wednesday, January 22, 2014

Look at our language

http://www.wordle.net/show/wrdl/7479336/Metadata



More famous than Simon Cowell on Blipfoto



Think about "persistent identifiers" which means that things go by agreed id's just to keep it simple questions is how long?  But, long term persistent cannot be promised because programming languages will change in the future or a system to make the old language work which could harm current languages used.

A different type of problem... huh... URLs and domain names versus time, which almost makes them human.... This idea of "persistence" is an issue for us info pros in the academic world when locating dissertations or research papers over the long term. We need to develop a social construct that people maintain, which in turn will create "persistent" identifiers that can live forever, as far as we know.
Yes, in the future I would like to electronically access my deceased grandmother's thesis; wouldn't you?
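
As a toy model of that social construct (not how any real handle, DOI, or OpenID system is implemented), a persistent identifier is just an agreed-upon name plus a lookup table that somebody keeps maintaining when the underlying location moves:

```python
# A tiny, hypothetical resolver: the persistent ID never changes, while the
# location it points to can be updated by whoever maintains the table.
RESOLVER = {
    "thesis:grandmother-1962": "https://repository.example.edu/old/thesis123",
}

def resolve(persistent_id: str) -> str:
    """Look up the current location registered for a persistent identifier."""
    return RESOLVER[persistent_id]

# Years later the repository moves; only the table changes, never the ID:
RESOLVER["thesis:grandmother-1962"] = "https://newhost.example.org/items/987"
print(resolve("thesis:grandmother-1962"))
```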


  As the famous band Queen said, "Who wants to live forever."

Defining "Born Digital" by OCLC research (Ricky Erway)



Well, the definition provided by the author does hold water: as stated, "born-digital resources are items created and managed in digital form."

The author's selection of types included,
  • photos
  • documents
  • web content
  • manuscripts
  • electronic records
  • static data sets
  • dynamic data
  • art
  • media publications
Each type was paired with a suggested medium, and as an experienced hoarder I was intrigued: in some respects digital document organization at home is going really well, but like all consumers/creators I do not maintain a consistent file-naming convention (a toy sketch of one is below). So, basically, I create "folksonomy" metadata for my use only. I can imagine caretakers of large born-digital collections dealing with one-trick-pony solutions created on a case-by-case basis.
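
For what it's worth, a consistent personal naming convention can be as simple as date_creator_title; this little sketch uses my own toy convention (not an OCLC recommendation) with only the standard library:

```python
import datetime
import pathlib
import re

def conventional_name(path: pathlib.Path, creator: str, title: str) -> str:
    """Build a YYYYMMDD_creator_title.ext name from a file's modified date."""
    stamp = datetime.date.fromtimestamp(path.stat().st_mtime).strftime("%Y%m%d")
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{stamp}_{creator.lower()}_{slug}{path.suffix.lower()}"

# Example (commented out so the sketch has no side effects):
# p = pathlib.Path("IMG_0042.JPG")
# p.rename(p.with_name(conventional_name(p, "tamara", "Christmas at Grandma's")))
```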

Many difficulties face information professionals in creating, maintaining, and accessing digital content, as well as avoiding outdated media, poor use of funds allocated to resource development, and threats to format integrity.

Suggestions for the way forward are common sense to most of us who understand the variety of issues facing metadata in a totally digital world. The establishment of standards, tools, and procedures is tedious work, but it is the opportunity to show the value of information organization and create value for end-users of born-digital collections.
  

Thursday, January 16, 2014

Thinking About The Catalog By Lorcan Dempsey



We have all seen his name; Lorcan Dempsey has his finger on the pulse of information professional issues.

Lorcan reviews the UC report on bibliographic infrastructure, how it should be constructed and maintained, and how NCSU has implemented suggestions from the UC report.

Some things he took from the report include:

  • Services- ranked in meaningful way
  • Bibliographic structure-schema appropriateness
  • At the point of need-be where the user is
  • Discovery-pull of bibliographic universe
  • Technical processing-cost reduction of bibliographic mgmt
  • Platform and organization-what to build a unified platform on
  • Value-knowledge of inefficiencies
We have not redeemed the full value of the bibliographic investment integral to the library mission. The role of the catalog in relation to wider bibliographic database structures is a consideration when thinking about the catalog. Innovation in bibliographic development can only improve access for users, along with structural improvements in catalogs such as erasing boundaries between databases. OCLC is the authoritative expert that can lead the way on many points discussed in the UC report.

Metadata- Pathways to Digital Information (Intro)




The introduction, written by Murtha Baca, discusses issues that have plagued metadata since the term was coined. As stated in the first paragraph, the digital world is in constant flux and change. The conversation is still searching for an "authoritative digital resource," which means different things to the majority of information professionals.

As stated, metadata creation should be a collaborative process, particularly on the issue of rights metadata. Data created by users poses a problem: it lacks the structure that information professionals trained in metadata creation provide. The authors understand the importance of user-created content in the ongoing conversation about metadata.

Additional chapters cover topics such as the "Hidden Web," the legal ramifications of open access, and the barriers created to prevent access to many digital materials. This touches on what union catalogs mean if a layman doesn't know how to access that specific web page.

The ongoing message of the introduction is a lasting commitment to the creation and continuous updating of various types of metadata across the variety of collections being digitized. The mantra: slow down the process and commit to concise creation, understanding that the methods used to create metadata must meet the procedures, protocols, and data standards established and followed by all who create.

I agree with the author that the creation of metadata is an investment in knowledge management, one that can lead to a large payoff for consistent effort and enhance end-users' ability to access, which is why we create digital resources.

Monday, January 13, 2014

News from the Library of Congress

http://1.usa.gov/1hPYg7l

This was tweeted by a classmate so kudos or good looking out.

The ACRL is updating cataloging guidelines for the description of pictures, which sounds like great news for all.
The new guidelines cover still images of all varieties and provide wording for explanatory notes, which I hope will unify some vocabulary, since there is more than one way to describe something.
The guidelines come from the very credible Bibliographic Standards Committee of the ACRL Rare Books and Manuscripts Section, and borrow much from DCRM (Descriptive Cataloging of Rare Materials) as applied to graphic items. DCRM(G) is one of a family of manuals that specialize in cataloging rules for various formats.

The new guidelines ease use for others in the field, such as archives and/or museums and many other organizations that may benefit from a descriptive cataloging toolbox. Questions can be submitted to their user group.

What is metadata? An Xmas-themed exploration.



I enjoyed Bonnie Swoger's article; it was a way to simplify the terms and really explain the concept in layman's language using Christmas. The Xmas analogy is one everyone gets. Rules are rules, and "simply a structured description of something else" is a simple definition that I was able to explain to my thirteen-year-old daughter, and she got it. Can it really be that easy to define? My daughter thought so after reading the article and applied it to other events. So does that mean she is smarter? No, just closer to the ground of technological shifts.

The article explains the reasoning for each element of a data file and how we subconsciously create sets of metadata in everyday life, as in the article's example of a Christmas picture. I do it every time I view my family photos: I recreate the event with all the data points, such as where, when, and who, though some items, like what camera was used or the format, are missing from the process. But in some shape, form, or fashion we are creating metadata in our own world. The problem is that we create it for selfish use, for our eyes only, which really doesn't make it easier for my mom to understand the same photos. So I guess I am in agreement that even in our personal lives we need "sound data management practices," or grandma will never find that cute baby picture, and even if she does she won't be able to tell where it was taken. (A tiny sketch of such a photo record is below.)
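
To make the where/when/who point concrete, here is the kind of tiny record I have in mind; the keys and values are invented for illustration, not any formal standard:

```python
# A home-grown record for one family photo; keys and values are made up.
christmas_photo = {
    "title": "Christmas morning at Grandma's",
    "date": "2013-12-25",
    "place": "Tuscaloosa, AL",
    "people": ["Grandma", "Mom", "my daughter"],
    "camera": None,   # unknown: a gap in my personal metadata practice
    "format": "JPEG",
}

# Even this much structure lets Grandma find the picture by person or place:
if "Grandma" in christmas_photo["people"]:
    print("Found it:", christmas_photo["title"], "-", christmas_photo["place"])
```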

I remember that old slogan of my mother's: a place for everything and everything in its place. That is organization from the eyes of a child with a junky room who wants to find the doll she played with last week.

Saturday, January 11, 2014

First class meeting blues



Well, the semester has started with a bang, bang... metadata... I am so looking forward to filling in the blanks left by LS560, in other words Digital Libraries.... We used Greenstone, which was a great platform for building a digital collection on football players... you know Dr. Albertson is a fan... I have never had any followers on Twitter, which makes me feel that I need to develop some interesting thoughts on the many readings. Please respond to my rants... chat later... see you on the flip side... TAM