Godby, Carol Jean
Overview
Works: 46 works in 84 publications in 1 language and 602 library holdings
Genres: Academic theses; Surveys
Roles: Author, Researcher, Research team member, Research team head, Editor
Classifications: Z666.73.L56, 025.0427
Most widely held works by Carol Jean Godby
Library Linked Data in the Cloud : OCLC's Experiments with New Models of Resource Description by Carol Jean Godby
21 editions published in 2015 in English and Undetermined and held by 322 WorldCat member libraries worldwide
This book describes OCLC's contributions to the transformation of the Internet from a web of documents to a Web of Data. The new Web is a growing "cloud" of interconnected resources that identify the things people want to know about when they approach the Internet with an information need. The linked data architecture has achieved critical mass just as it has become clear that library standards for resource description are nearing obsolescence. Working for the world's largest library cooperative, OCLC researchers have been active participants in the development of next-generation standards for library resource description. By engaging with an international community of library and Web standards experts, they have published some of the most widely used RDF datasets representing library collections and librarianship
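As a rough illustration of the kind of RDF data such work produces, the sketch below describes a single book with Schema.org terms using the rdflib library. The identifier and property values are invented for the example and are not drawn from OCLC's published datasets.

```python
# Illustrative sketch: describing a book as linked data with Schema.org
# terms via rdflib. The URI and values are placeholders for the example.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)

work = URIRef("http://example.org/work/library-linked-data-in-the-cloud")
g.add((work, RDF.type, SCHEMA.Book))
g.add((work, SCHEMA.name, Literal("Library Linked Data in the Cloud")))
g.add((work, SCHEMA.author, Literal("Carol Jean Godby")))
g.add((work, SCHEMA.datePublished, Literal("2015")))

print(g.serialize(format="turtle"))
```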
Language files : materials for An introduction to language by Ohio State University (Book)
11 editions published between 1982 and 1985 in English and held by 50 WorldCat member libraries worldwide
Common ground : exploring compatibilities between the linked data models of the Library of Congress and OCLC by Carol Jean Godby
2 editions published in 2015 in English and held by 34 WorldCat member libraries worldwide
This document describes the progression of the BIBFRAME model developed by the Library of Congress and models based on Schema.org developed by OCLC, while also analyzing the alignment of the two models. Recommendations for closer alignment of the Library of Congress and OCLC linked data models are also addressed
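For readers unfamiliar with what a model alignment looks like in practice, the following is a minimal, hypothetical sketch of a property-level mapping between a BIBFRAME-style vocabulary and Schema.org. The pairs are illustrative only and are not taken from the report.

```python
# Hypothetical property-level alignment between BIBFRAME and Schema.org.
# The pairs below are illustrative; consult the report for the actual analysis.
BF = "http://id.loc.gov/ontologies/bibframe/"
SCHEMA = "http://schema.org/"

ALIGNMENT = {
    BF + "title":        SCHEMA + "name",
    BF + "contribution": SCHEMA + "contributor",
    BF + "subject":      SCHEMA + "about",
}

def translate_property(prop):
    """Return the Schema.org counterpart of a BIBFRAME property, if aligned."""
    return ALIGNMENT.get(prop)

print(translate_property(BF + "title"))  # -> http://schema.org/name
```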
The relationship between BIBFRAME and OCLC's linked-data model of bibliographic description : a working paper by Carol Jean Godby
2 editions published in 2013 in English and held by 31 WorldCat member libraries worldwide
This document describes a proposed alignment between BIBFRAME and a model being explored by OCLC with extensions (SchemaBibEx) proposed by the Schema Bib Extend project, a W3C-sponsored community group tasked with adapting Schema.org to the description of library resources. The key result is that the two efforts are complementary except for some common vocabulary required for the most important entities (e.g., FRBR Group 1) and relationships
Mapping ONIX to MARC by Carol Jean Godby
1 edition published in 2010 in English and held by 30 WorldCat member libraries worldwide
Creating library linked data with Wikibase : lessons learned from Project Passage by Carol Jean Godby
1 edition published in 2019 in English and held by 28 WorldCat member libraries worldwide
"The OCLC Research linked data Wikibase prototype ('Project Passage') provided a sandbox in which librarians from 16 US institutions could experiment with creating linked data to describe resources--without requiring knowledge of the technical machinery of linked data. This report provides an overview of the context in which the prototype was developed, how the Wikibase platform was adapted for use by librarians, and eight use cases where pilot participants (co-authors of this report) describe their experience of creating metadata for resources in various formats and languages using the Wikibase editing interface. During the ten months of the pilot, the participants gained insight in both the potential of linked data in library cataloging workflows and the gaps that must be addressed before machine-readable semantic data can be fully adopted. Among the lessons learned: The building blocks of Wikibase can be used to create structured data with a precision that exceeds current library standards; The Wikibase platform enables user-driven ontology design but raises concerns about how to manage and maintain ontologies; The Wikibase platform, supplemented with OCLC's enhancements and stand-alone utilities, enables librarians to see the results of their effort in a discovery interface without leaving the metadata-creation workflow; Robust tools are required for local data management; To populate knowledge graphs with library metadata, tools that facilitate the import and enhancement of data created elsewhere are recommended; The pilot underscored the need for interoperability between data sources, both for ingest and export; The traditional distinction between authority and bibliographic data disappears in a Wikibase description
A crosswalk from ONIX Version 3.0 for Books to MARC 21 by Carol Jean Godby
1 edition published in 2012 in English and held by 27 WorldCat member libraries worldwide
Describes OCLC's experience with mapping ONIX 3.0 for Books to MARC and updates the 2010 report Mapping ONIX to MARC which focused on ONIX 2.1. As in the earlier work [Godby, Carol Jean. Mapping ONIX to MARC. Dublin, Ohio : OCLC Research, c2010], the goal is to define and make public a specification for creating a production-grade bibliographic record appropriate for the library community from publisher-supplied metadata
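To give a flavor of what such a crosswalk specifies, the sketch below pulls a few values from an abbreviated ONIX 3.0 Product record and emits MARC-style field/subfield pairs. The snippet and the three mappings are illustrative simplifications, not the published specification.

```python
# Simplified crosswalk sketch: a few ONIX 3.0 elements mapped to MARC-style
# fields. The record and values are invented; output is a plain dict rather
# than a real MARC record.
import xml.etree.ElementTree as ET

onix = """
<Product>
  <ProductIdentifier><ProductIDType>15</ProductIDType><IDValue>9780000000000</IDValue></ProductIdentifier>
  <DescriptiveDetail>
    <TitleDetail><TitleElement><TitleText>Example title</TitleText></TitleElement></TitleDetail>
    <Contributor><PersonNameInverted>Godby, Carol Jean</PersonNameInverted></Contributor>
  </DescriptiveDetail>
</Product>
"""

product = ET.fromstring(onix)
marc_fields = {
    "020$a": product.findtext(".//IDValue"),              # ISBN (placeholder value)
    "100$a": product.findtext(".//PersonNameInverted"),   # main entry, personal name
    "245$a": product.findtext(".//TitleText"),            # title statement
}
for tag, value in marc_fields.items():
    print(tag, value)
```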
Social metadata for libraries, archives, and museums by Karen Smith-Yoshimura
1 edition published in 2011 in English and held by 26 WorldCat member libraries worldwide
Presents results of a survey conducted in October-November 2009 of managers of websites that support the creation of "social metadata," the content generated from certain "social media" features (e.g., tagging, comments, reviews, images, videos, ratings, recommendations, lists, links to related articles, etc.) that support the contribution of user-generated content. The sites that responded to the survey originate from academic libraries and archives, national libraries or archives, non-profit organizations not affiliated with any institution, museums, historical societies, consortia, other cultural institutions, public libraries, plus one botanical garden and one special library
A computational study of lexicalized noun phrases in English by Carol Jean Godby (Book)
4 editions published in 2002 in English and held by 5 WorldCat member libraries worldwide
This study develops and evaluates a linguistically natural computational method for recognizing lexicalized noun phrases in a large corpus of English-language engineering text by synthesizing the insights of studies in traditional linguistics and computational linguistics. From the scholarship in theoretical linguistics, the analysis adopts the perspective that lexicalized noun phrases represent the names of concepts that are important to a community of speakers and have survived beyond a single context of use. Theoretical linguists have also proposed diagnostic tests for identifying lexicalized noun phrases, many of which can be formalized in a computational study. From the scholarship in computational linguistics, the analysis incorporates the view that a linguistic investigation can be extended and verified by processing relevant evidence from a corpus of text, which can be evaluated using mathematical models that do not require categorical input
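As a loose illustration of noun-phrase candidate extraction (not the study's actual method), the sketch below chunks candidate noun phrases with a simple part-of-speech pattern and uses corpus frequency as a crude lexicalization signal; it assumes NLTK with its tokenizer and tagger data installed.

```python
# Illustrative sketch only: extract candidate noun phrases with a simple POS
# pattern and count corpus frequency. Requires NLTK data, e.g.
# nltk.download('punkt') and nltk.download('averaged_perceptron_tagger').
from collections import Counter

import nltk

GRAMMAR = "NP: {<JJ|NN.*>*<NN.*>}"   # adjectives/nouns ending in a noun
chunker = nltk.RegexpParser(GRAMMAR)

def noun_phrases(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    tree = chunker.parse(tagged)
    for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
        yield " ".join(word for word, _tag in subtree.leaves()).lower()

corpus = [
    "The heat exchanger transfers heat between two fluids.",
    "A plate heat exchanger uses metal plates to transfer heat.",
]
counts = Counter(np for sentence in corpus for np in noun_phrases(sentence))
print(counts.most_common(5))
```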
Two paths to interoperable metadata by Carol Jean Godby
1 edition published in 2003 in English and held by 3 WorldCat member libraries worldwide
This paper describes a prototype for a Web service that translates between pairs of metadata schemas. Despite a current trend toward encoding in XML and XSLT, we present arguments for a design that features a more distinct separation of syntax from semantics. The result is a system that automates routine processes, has a well-defined place for human input, and achieves a clean separation of the document data model, the document translations, and the machinery of the application
Encoding application profiles in a computational model of the crosswalk by Carol Jean Godby
2 editions published in 2008 in English and held by 3 WorldCat member libraries worldwide
OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill, et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time
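The extended crosswalk format can be pictured as rows carrying two extra columns, as in the hypothetical sketch below; the rows and the lookup code are illustrative and do not reproduce OCLC's spreadsheets or Seel scripts.

```python
# Sketch of the extended crosswalk format: rows of
# (source element, target element, namespace, relation). Rows are illustrative.
import csv
import io

CROSSWALK_CSV = """source,target,namespace,relation
coverage,spatial,dcterms,narrower
coverage,temporal,dcterms,narrower
creator,creator,dc,equivalent
"""

rows = list(csv.DictReader(io.StringIO(CROSSWALK_CSV)))

def targets_for(source_element):
    """Return (namespace, target, relation) triples for a source element."""
    return [(r["namespace"], r["target"], r["relation"])
            for r in rows if r["source"] == source_element]

print(targets_for("coverage"))
# [('dcterms', 'spatial', 'narrower'), ('dcterms', 'temporal', 'narrower')]
```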
What do application profiles reveal about the learning object metadata standard? by Carol Jean Godby
1 edition published in 2004 in English and held by 3 WorldCat member libraries worldwide
Toward element-level interoperability in bibliographic metadata by Carol Jean Godby
1 edition published in 2008 in English and held by 2 WorldCat member libraries worldwide
This paper discusses an approach and set of tools for translating bibliographic metadata from one format to another. A computational model is proposed to formalize the notion of a 'crosswalk'. The translation process separates semantics from syntax, and specifies a crosswalk as machine executable translation files which are focused on assertions of element equivalence and are closely associated with the underlying intellectual analysis of metadata translation. A data model developed by the authors called Morfrom serves as an internal generic metadata format. Translation logic is written in an XML scripting language designed by the authors called the Semantic Equivalence Expression Language (Seel). These techniques have been built into an OCLC software toolkit to manage large and diverse collections of metadata records, called the Crosswalk Web Service
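A minimal sketch of the pivot-format idea follows: a source record is mapped into a generic internal representation and then out to a target format. The dictionaries stand in for machine-executable crosswalk files; Morfrom and Seel themselves are not reproduced here.

```python
# Minimal pivot-format translation sketch (not Morfrom/Seel): source record
# -> generic internal representation -> target format.
SOURCE_TO_INTERNAL = {"dc:title": "title", "dc:creator": "agent"}   # e.g. Dublin Core in
INTERNAL_TO_TARGET = {"title": "245$a", "agent": "100$a"}           # e.g. MARC out

def translate(record):
    internal = {SOURCE_TO_INTERNAL[k]: v for k, v in record.items()
                if k in SOURCE_TO_INTERNAL}
    return {INTERNAL_TO_TARGET[k]: v for k, v in internal.items()
            if k in INTERNAL_TO_TARGET}

print(translate({"dc:title": "Two paths to interoperable metadata",
                 "dc:creator": "Godby, Carol Jean"}))
# {'245$a': 'Two paths to interoperable metadata', '100$a': 'Godby, Carol Jean'}
```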
Library classification schemes and access to electronic collections : enhancement of the Dewey Decimal Classification with supplemental vocabulary by Diane Vizine-Goetz
1 edition published in 1997 in English and held by 2 WorldCat member libraries worldwide
A traditional library classification scheme, such as the Dewey Decimal Classification, requires ongoing improvements to the terminology of caption headings and to the currency, size and scope of its indexing vocabulary if it is to function effectively as a knowledge structuring tool for electronic collections. In this paper we describe two complementary research efforts to enhance the DDC with supplemental vocabulary. One project focuses on effecting links between the DDC and other subject access systems. The other project is concerned with associating current and end-user terminology from full text with the DDC. With these more versatile versions of the DDC, we are exploring the potential for automatically assigning classes to electronic documents
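A toy sketch of class assignment from an enriched vocabulary is shown below: each class number carries a bag of supplemental terms, and a document is scored by term overlap. The class numbers and term sets are placeholders, not the project's enhanced DDC data.

```python
# Toy classifier sketch: score DDC classes by overlap between a document's
# terms and each class's supplemental vocabulary. Data below is a placeholder.
DDC_VOCABULARY = {
    "025.04": {"information retrieval", "online searching", "search engines"},
    "004.6":  {"computer networks", "internet", "protocols"},
}

def suggest_classes(document_terms, top_n=1):
    scores = {ddc: len(terms & document_terms)
              for ddc, terms in DDC_VOCABULARY.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(suggest_classes({"internet", "search engines", "information retrieval"}))
```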
Metadata switch : thinking about some metadata management and knowledge organization issues in the changing research and learning landscape by Lorcan Dempsey
1 edition published in 2005 in English and held by 2 WorldCat member libraries worldwide
The academic library is not an end in itself. It supports research, learning and scholarship, and it must adapt as research and learning behaviors change in a network environment. The papers in this volume give a good sense of the challenges posed by such developments and the manifold library response. This paper briefly considers some of these issues, and takes them as its context, but quickly moves to a very specific emphasis. It considers how such library responses create new metadata management and knowledge organization questions, and it then outlines some of the work in OCLC Research which responds to these issues
A repository of metadata crosswalks by Carol Jean Godby
1 edition published in 2004 in English and held by 2 WorldCat member libraries worldwide
This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community
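The packaging idea can be sketched as a small METS document that points to the crosswalk and to its source and target standards, as below; the element choices are simplified and the URLs are placeholders rather than a real repository profile.

```python
# Simplified METS packaging sketch: one record pointing to a crosswalk and
# to its source and target standards. URLs are placeholders.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS_NS)
ET.register_namespace("xlink", XLINK_NS)

mets = ET.Element(f"{{{METS_NS}}}mets", {"LABEL": "ONIX 3.0 to MARC 21 crosswalk"})
file_sec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")
file_grp = ET.SubElement(file_sec, f"{{{METS_NS}}}fileGrp", {"USE": "crosswalk"})

for file_id, href in [
    ("crosswalk", "http://example.org/crosswalks/onix3-to-marc21.xsl"),
    ("source-standard", "http://example.org/standards/onix-3.0.xsd"),
    ("target-standard", "http://example.org/standards/marc21.xsd"),
]:
    f = ET.SubElement(file_grp, f"{{{METS_NS}}}file", {"ID": file_id})
    ET.SubElement(f, f"{{{METS_NS}}}FLocat",
                  {"LOCTYPE": "URL", f"{{{XLINK_NS}}}href": href})

print(ET.tostring(mets, encoding="unicode"))
```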
A metalanguage for describing Internet resources by Carol Jean Godby
2 editions published in 1996 in English and held by 2 WorldCat member libraries worldwide
This paper describes a tool that can be used to create customized descriptions of Internet resources. When these descriptions adhere to established standards such as MARC, TEI, and FGDC, they can be collected into databases and searched with a common, easy-to-use interface using the OCLC Spectrum System. The Dublin Core Element Set ("Dublin Core") is used to implement semantic interoperability
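As a simple illustration of a standards-based description, the sketch below builds a short Dublin Core record for an invented Internet resource with ElementTree; it is not the Spectrum tool's output format.

```python
# Small sketch of a Dublin Core description of an Internet resource; the
# resource and its values are invented for the example.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.Element("record")
for element, value in [
    ("title", "Example resource"),
    ("creator", "Example Organization"),
    ("identifier", "http://example.org/resource"),
    ("type", "Text"),
]:
    e = ET.SubElement(record, f"{{{DC_NS}}}{element}")
    e.text = value

print(ET.tostring(record, encoding="unicode"))
```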
International Conference on Dublin Core and Metadata Applications by Carol Jean Godby
1 edition published in 2008 in English and held by 2 WorldCat member libraries worldwide
The WordSmith toolkit by Carol Jean Godby
1 edition published in 1998 in English and held by 2 WorldCat member libraries worldwide
The WordSmith toolkit is an integrated set of tools written in Java that allows the user to extract words and phrases from full-text documents. It is used at OCLC to support research that should lead to improved indexes for full-text documents and better methods for developing and maintaining authority files and classification systems
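A rough Python analogue of phrase extraction (WordSmith itself is a Java toolkit) is sketched below: repeated word bigrams in a tiny corpus are collected as candidate index phrases.

```python
# Illustrative phrase-extraction sketch, not the WordSmith implementation:
# count repeated word bigrams as candidate index phrases.
import re
from collections import Counter

def bigrams(text):
    words = re.findall(r"[a-z]+", text.lower())
    return zip(words, words[1:])

corpus = [
    "Authority files support subject cataloging.",
    "Subject cataloging relies on controlled vocabulary and authority files.",
]
counts = Counter(bg for doc in corpus for bg in bigrams(doc))
phrases = [" ".join(bg) for bg, n in counts.most_common() if n > 1]
print(phrases)  # e.g. ['authority files', 'subject cataloging']
```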
An architecture for scholarly publishing on the World Wide Web (Book)
2 editions published between 1994 and 1995 in English and held by 2 WorldCat member libraries worldwide
Audience Level: 0 (General) to 1 (Special)
- OCLC Research Publisher
- Mixter, Jeffrey Other
- Wang, Shenghui (Computer scientist) Other Author
- OCLC
- Smith-Yoshimura, Karen Research team member
- Wallace, Rex
- Jolley, Catherine
- Ohio State University Department of Linguistics
- Library of Congress
- Denenberg, Ray
Associated Subjects
Academic libraries Archives Automatic indexing BIBFRAME (Conceptual model) Cataloging of computer network resources Cloud computing Computational linguistics Engineering--Language English language--Noun phrase English language--Study and teaching Exchange of bibliographic information Information organization Internet searching Language and languages Language and languages--Study and teaching Language and languages--Study and teaching (Higher) Libraries Libraries and the Internet Linguistics Linguistics--Methodology Linguistics--Study and teaching Linked data Machine-readable bibliographic data MARC formats Metadata Metadata crosswalks Metadata--Standards Museums Natural language processing (Computer science) OCLC OCLC.--Office of Research ONIX format RDF (Document markup language) Scholarly electronic publishing Semantic Web Social media Subject cataloging User-generated content World Wide Web
Alternative Names
Carol Jean Godby American linguist
Carol Jean Godby lingüista estadounidense
Carol Jean Godby linguista norte-americana
Carol Jean Godby linguista statunitense
Carol Jean Godby llingüista estauxunidense
Godby, Jean