Andrew Bevan, Ben Soares, Morag Macgregor and John MacColl (EDINA)
To begin, Andrew Bevan gave an introduction, "Getting more value from your subscriptions: cross-searching and OpenURL links", which included examples of cross-searching from EDINA GetRef, UEA's MetaLib implementation, the University of Luton's MetaFind service, and the University of Edinburgh's Collections Gateway, which uses the Endeavour Encompass software.
An Edinburgh University attendee mentioned that their Collection Gateway is still in the development stage.
Nicholas Lewis's UKSG handout "Do we need a library portal / linkage software" was discussed, as was Yvonne Hamblin and Ruth Stubbings' case study "The Implementation of MetaLib and SFX at Loughborough University". It was noted that the implementation of the library portal at Loughborough had taken 167 days in total.
There was discussion of how GetRef provides a "shallow" search across databases, as opposed to the more complex searches possible in the individual databases' native interfaces. It was pointed out that cross-searching eliminates the individuality of each database; for example, there are value-added fields in a native interface, but a cross-search cannot focus in on this specialist metadata. It was added that a basic search on the BIOSIS native interface and in GetRef produced the same result set. An example was provided of a search on Compendex which had given a different result from the same search in GetRef (10 hits as opposed to 5,000). It was explained that a default year limit on the Compendex native interface might be a factor in this discrepancy.
It was remarked that thesaurus and other functions are not included in a cross-search, and that a complex search should be done in the native interface. One attendee asked whether MetaLib and other cross-searching services provide the same functionality as the native interface, or whether they also provide shallow cross-searching. The response was that there is a trade-off between functionality and performance in cross-searching, and that an uninformed user could cross-search a large number of targets, resulting in slow searches, or even causing the service to crash, thus discrediting the service in that user's mind. It was added that cross-searching is lowest-common-denominator searching.
A suggestion was made that the core fields of Z39.50 only allow basic searches such as title, author, and subject, but it was clarified that Z39.50 can support more complex searches, depending on how indexes are mapped in the configuration of a given portal. This led to a discussion of how databases may have controlled indexes, but the system administrator must be wary of mapping general searches to precise indexes.
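For illustration, the sketch below (in Python) shows how a portal might map generic search fields to Z39.50 Bib-1 "use" attributes and express a query in Prefix Query Format (PQF). The attribute numbers for title, author, subject and "any" are standard Bib-1 values, but the mapping table, field names and escaping are assumptions, not a description of any particular portal's configuration.

```python
# A minimal sketch of mapping generic portal search fields to Z39.50
# Bib-1 "use" attributes, expressed in Prefix Query Format (PQF).
# The field names and the mapping table are illustrative only.

# Standard Bib-1 use attributes (a small subset).
BIB1_USE = {
    "title": 4,        # Title
    "author": 1003,    # Author
    "subject": 21,     # Subject heading
    "any": 1016,       # Any (keyword) index
}

def pqf_term(field: str, term: str) -> str:
    """Build a single-term PQF clause for a given portal search field.

    Mapping everything to 'any' gives only lowest-common-denominator
    searching; mapping to precise indexes (title, author, subject)
    depends on how each target's indexes are configured, which is the
    administrator's responsibility.
    """
    use = BIB1_USE.get(field, BIB1_USE["any"])
    return f'@attr 1={use} "{term}"'

def pqf_and(*clauses: str) -> str:
    """Combine PQF clauses with boolean AND (PQF uses prefix notation)."""
    query = clauses[0]
    for clause in clauses[1:]:
        query = f"@and {query} {clause}"
    return query

if __name__ == "__main__":
    # e.g. an author AND title search sent to a Z39.50 target
    print(pqf_and(pqf_term("author", "hamblin"), pqf_term("title", "metalib")))
    # -> @and @attr 1=1003 "hamblin" @attr 1=4 "metalib"
```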
The discussion turned to how users were guided to suitable resources from the wide choice offered by their university. Encompass at Edinburgh University uses subject-area searching, based on the University's colleges. There was concern that this could lead to a "top 10" approach, but it was clarified that the user can choose to limit their search in any way they wish, and the library can place one resource in several subject areas. It was commented that user education will become more important (ironically, as you would expect a portal to provide a simpler interface) so that users can find exactly what they want; in particular, the library will have to describe carefully what each resource contains. The mix of resource types also caused concern, and although UEA's MetaLib installation has a key of resource types, it was thought that users rarely paid attention to such details on an interface. Luton's method of grouping resource types in columns was praised. The question was raised as to whether the user cared whether a resource was a catalogue or an A&I database. It was suggested that users only separate resource types into full text and everything else.
An attendee from Stirling explained how they use GetRef as an alternative to MetaFind, which is a system-dependent product. It is possible to use LinkFinderPlus with GetRef, but not with MetaFind. Stirling use the Innovative product WebBridge rather than GetCopy. GetRef and GetCopy are two separate modules that can run together, but GetRef also works with other resolvers.
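The link resolvers mentioned here (GetCopy, SFX, LinkFinderPlus, WebBridge) all consume OpenURL links. As a minimal sketch, the Python below builds an OpenURL 0.1 query string for a journal article; the resolver base URL, source identifier and citation data are placeholders, not the addresses or parameters of any of the services discussed.

```python
# A sketch of constructing an OpenURL 0.1 link for a journal article,
# of the kind a resolver such as GetCopy, SFX, LinkFinderPlus or
# WebBridge consumes. The resolver base URL, the sid and the citation
# values are hypothetical placeholders.
from urllib.parse import urlencode

RESOLVER_BASE = "http://resolver.example.ac.uk/openurl"  # hypothetical

def openurl_for_article(citation: dict) -> str:
    """Turn citation metadata from an A&I database hit into an OpenURL."""
    params = {
        "sid": "example:source",        # source identifier (illustrative)
        "genre": "article",
        "issn": citation["issn"],
        "date": citation["year"],
        "volume": citation["volume"],
        "issue": citation["issue"],
        "spage": citation["start_page"],
        "atitle": citation["article_title"],
        "aulast": citation["author_surname"],
    }
    return f"{RESOLVER_BASE}?{urlencode(params)}"

print(openurl_for_article({
    "issn": "1234-5678", "year": "2003", "volume": "16", "issue": "2",
    "start_page": "143", "article_title": "An example article",
    "author_surname": "Smith",
}))
```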
Advanced searching functionality was discussed. It was asked whether any work had been done on the GetRef advanced search and whether advanced field searching was under consideration. The reply was that resources in GetRef will be grouped by subject and, as in the case of the Collections Gateway, a database can be included in more than one set. Each subscribing library will be asked to limit which resources are searched. The advanced field functionality has been completed but not yet added to the interface. There was discussion over whether to include author and title searches within the advanced search, and whether databases could be pre-selected. The reply was that, at present, all databases are searched by default, but GetRef will ultimately offer multiple levels of granularity in the choice of databases. The Collection Gateway's subjects expand to resource level, and it was confirmed that the project team chooses which resources go into each subject area, in consultation with the colleges of the University.
The difficulties caused by unreliable HTTP resources were discussed. Many of the Collection Gateway's resources are of this type, and the software provider, Endeavour, is working to resolve these problems. It was mentioned that Xgrain (GetRef's previous name) used to have good screen-scraping of Web of Science, but the addition of JavaScript to the Web of Science interface now makes screen-scraping more difficult, though not impossible. It was thought that the use of JavaScript in such interfaces was sometimes unnecessary. It was confirmed that Encompass (the Collection Gateway software) will screen-scrape at field level. In the past, Endeavour predicted that most web resources would provide XML gateways, but this has not proved to be the case, perhaps because service providers want users to come through the service's native interface. It was added that Web of Knowledge also poses a problem for HTML screen-scraping, and it is policy that there is no machine-to-machine interface for this resource. It was remarked that the unreliability of databases is the main problem for GetRef, and a possible solution is for GetRef to cross-search the databases and then direct the user to the native interface to view the results.
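To make the field-level screen-scraping mentioned above concrete, the sketch below parses a purely hypothetical results page into title, author and source fields; the HTML structure and class names are invented, and pages that assemble their results with JavaScript (the Web of Science difficulty described above) cannot be parsed this simply.

```python
# A sketch of field-level screen-scraping of a results page, assuming a
# purely hypothetical HTML structure. Real native interfaces differ,
# and JavaScript-built pages need heavier machinery than this.
from bs4 import BeautifulSoup

SAMPLE_HTML = """
<div class="record">
  <span class="title">An example article</span>
  <span class="author">Smith, J.</span>
  <span class="source">Example Journal, 2003</span>
</div>
"""

def scrape_records(html: str) -> list[dict]:
    """Extract title/author/source fields from each result record."""
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for record in soup.find_all("div", class_="record"):
        records.append({
            "title": record.find("span", class_="title").get_text(strip=True),
            "author": record.find("span", class_="author").get_text(strip=True),
            "source": record.find("span", class_="source").get_text(strip=True),
        })
    return records

print(scrape_records(SAMPLE_HTML))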
Authentication was the next topic of discussion. Concern was expressed that a service which had sounded very useful had been plagued by authentication problems. There was a question on how GetRef will fit in with the Shibboleth and Athens authentication systems. The response was that GetRef already uses Athens authentication, and the enquirer was pointed to the GetRef website, http://edina.ac.uk/getref, which links to the GetRef service. There was discussion on whether it was possible to run a search and then log in to an individual service. The answer was that GetRef will have a guest login, although PubMed and Zetoc are currently the only free services within GetRef, and that EDINA cannot search Athens targets without the user's Athens credentials, as EDINA machines do not necessarily have third-party access. It was added that it may be possible to use IP access. GetRef has a University of Edinburgh IP address, but in the case of the CSA database, which performs IP checking and non-Athens username and password authentication, the GetRef machine is now IP-neutral and can access the CSA database. It was confirmed that GetRef will not be loaded onto other people's machines. It was added that IP access is a problem for many e-journals, and EDINA is in consultation with the e-journal vendors to find a solution. A comment was made that Web of Science's IP access is very popular with users.
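As a small sketch of the IP-checking side of access control discussed above, the Python below tests whether a client address falls inside a licensed range using the standard ipaddress module; the ranges are documentation placeholders, not real institutional or vendor ranges.

```python
# A sketch of IP-range checking of the kind a database vendor performs
# before granting access; the licensed ranges are placeholders
# (RFC 5737 documentation addresses), not real institutional ranges.
from ipaddress import ip_address, ip_network

LICENSED_RANGES = [
    ip_network("192.0.2.0/24"),      # placeholder campus range
    ip_network("198.51.100.0/24"),   # placeholder proxy/portal range
]

def ip_permitted(client_ip: str) -> bool:
    """Return True if the client falls inside a licensed IP range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in LICENSED_RANGES)

# A cross-search service running on its own server presents that
# server's address, not the end user's, which is why Athens or other
# per-user credentials are still needed for many targets.
print(ip_permitted("192.0.2.41"))     # True
print(ip_permitted("203.0.113.7"))    # False
```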
The discussion turned to the question of embedding searches within portals or e-learning systems such as WebCT and Blackboard. Xgrain had a strong Learning and Teaching aspect. At the moment, GetRef's learning objects are mainly about information and training for the user. There are authentication issues with embedded searches, but it was agreed that this would be functionality worth investigating. It was added that GetRef was designed so it could be a stand-alone portal or be embedded in other applications such as the JISC subject portals. GetRef and GetCopy are primarily concerned with journal articles, but GetRef can be incorporated into a VLE (virtual learning environment), web page or portal, either as a call to EDINA, who will do the web configuration (the institution must do the profiling), or by the institution customizing GetRef itself. There is an existing GetRef teaching module available on the EDINA website.
It was mentioned that similar commercial projects also cross-search Google and other search engines, but it was concluded that the lowest common denominator is lowered further if the focus of the cross-search is broadened.
There was discussion of whether GetRef could support multiple profiles, so that a user would be presented with a focused list of resources based on their area of study, or whether it was better to present the user with a broader set of search results and allow them to refine their search to find the results they are interested in. It was argued that there is a blurring of subject areas nowadays and that a resource that might not be considered relevant could contain a useful reference. It was added that GetRef encourages good searching practice and wants to promote the use of A&I databases that might not otherwise be fully utilized. Another view was that users are focused and want to get directly to the relevant information. It was further added that users find Google an attractive service as they are given access to a broad selection of results, but the importance of knowing about the enquirer, where they come from and what they are interested in, was also stressed. There was a suggestion that it would be more useful to order resources by relevance to the user than to exclude results from certain resources. It was concluded that the aim of GetRef was to perform the same task as a reference interview with a librarian.
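The suggestion to rank rather than exclude can be illustrated with the small sketch below, which orders resources by subject overlap with a user profile so that nothing is filtered out; the resources, subject labels and scoring are entirely hypothetical.

```python
# An illustrative sketch of ordering resources by relevance to a user
# profile rather than excluding any of them; the subject labels and
# scoring are invented for the example.
RESOURCES = {
    "BIOSIS": {"biology", "medicine"},
    "Compendex": {"engineering"},
    "PubMed": {"medicine"},
    "Zetoc": {"general"},
}

def rank_resources(user_subjects: set[str]) -> list[str]:
    """Rank every resource by subject overlap with the user's profile.

    Nothing is filtered out, so a resource outside the user's obvious
    area can still surface a useful reference; it simply appears lower.
    """
    def score(name: str) -> int:
        return len(RESOURCES[name] & user_subjects)
    return sorted(RESOURCES, key=score, reverse=True)

print(rank_resources({"medicine"}))
# -> ['BIOSIS', 'PubMed', 'Compendex', 'Zetoc']  (ties keep dictionary order)
```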