Publications
Sharing Knowledge
In this section we share the publications and communications produced by our research group. Here you can find the results of our studies, presentations, and articles that contribute to advancing knowledge in our field.
ACCESS OUR PUBLICATIONS ON GOOGLE SCHOLAR
Below we list our academic publications; you can also browse them on Google Scholar.
Font-Julian, Cristina I; Orduña-Malea, Enrique; Codina, Lluís
ChatGPT Search as a tool for scholarly tasks: evolution or devolution? Journal article
In: Infonomy, vol. 2, no. 5, pp. 15, 2024, ISSN: 2990-2290.
Abstract | Links | BibTeX | Tags: A-GEO, Academic Generative Engine Optimization, Academic Tasks, AI, ChatGPT, ChatGPT Search, Generative Artificial Intelligence, Information Retrieval, Link Analysis, Narrative Synthesis, Quantitative vs. Qualitative, Scholarly Tasks, Search Engines, Web Search
@article{chatgpt-search,
title = {ChatGPT Search as a tool for scholarly tasks: evolution or devolution?},
author = {Cristina I Font-Julian and Enrique Orduña-Malea and Lluís Codina},
doi = {10.3145/infonomy.24.059},
issn = {2990-2290},
year = {2024},
date = {2024-11-26},
urldate = {2024-11-26},
journal = {Infonomy},
volume = {2},
number = {5},
pages = {15},
abstract = {ChatGPT Search was launched on October 31 by OpenAI as a new AI-powered search engine. Among its features, it stands out for its ability to retrieve information from various online sources, including scholarly databases, which potentially allows the use of this tool for academic tasks, both quantitative and qualitative. To test its features, five academic tasks are designed: two quantitative (collecting hit count estimates from Google Search and scraping bibliometric indicators from ResearchGate); two qualitative tasks (performing a narrative synthesis of an academic topic and generating a brief academic author profile), and a mixed task (identifying, collecting and describing a list of publications from Google Scholar Profiles). The results show the inability of ChatGPT Search to conduct quantitative tasks correctly, fabricating the results (hallucination). Qualitative tasks are performed with better results; however, errors are detected, which prevent recommending the tool without manual analysis and refinement. Finally, the ability to generate links to scientific publications can open up competition among academic sites to be mentioned in the ChatGPT Search responses, giving rise to Academic Generative Engine Optimization (A-GEO).},
keywords = {A-GEO, Academic Generative Engine Optimization, Academic Tasks, AI, ChatGPT, ChatGPT Search, Generative Artificial Intelligence, Information Retrieval, Link Analysis, Narrative Synthesis, Quantitative vs. Qualitative, Scholarly Tasks, Search Engines, Web Search},
pubstate = {published},
tppubtype = {article}
}
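The quantitative evaluation described in the abstract above essentially amounts to comparing figures returned by the tool against manually verified values and flagging the discrepancies as fabrications. The following minimal Python sketch illustrates only that kind of check; it is not code from the paper, and all indicator names and figures are hypothetical.

# Minimal sketch (not from the paper): flag fabricated bibliometric values
# by comparing tool-reported figures against manually verified ones.
# All indicator names and numbers are hypothetical.
def flag_fabrications(reported: dict, verified: dict, tolerance: float = 0.05) -> dict:
    """Return indicators whose reported value deviates from the verified
    value by more than `tolerance` (relative error)."""
    flagged = {}
    for indicator, true_value in verified.items():
        claimed = reported.get(indicator)
        if claimed is None:
            flagged[indicator] = ("missing", true_value)
            continue
        if abs(claimed - true_value) / max(abs(true_value), 1) > tolerance:
            flagged[indicator] = (claimed, true_value)
    return flagged

if __name__ == "__main__":
    # Hypothetical example: indicators verified by hand vs. values returned
    # by an AI search tool for the same author profile.
    verified = {"citations": 1250, "documents": 48, "reads": 30400}
    reported = {"citations": 2100, "documents": 48, "reads": 29950}
    print(flag_fabrications(reported, verified))  # {'citations': (2100, 1250)}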
Orduña-Malea, Enrique; Alonso-Arroyo, Adolfo; Ontalba-Ruipérez, José-Antonio; Catalá-López, Ferrán
Evaluating the online impact of reporting guidelines for randomised trial reports and protocols: a cross-sectional web-based data analysis of CONSORT and SPIRIT initiatives Journal article
In: Scientometrics, vol. 128, no. 1, pp. 407–440, 2023, ISSN: 1588-2861.
Abstract | Links | BibTeX | Tags: Altmetrics, Article-Level Metrics, Clinical Trials, CONSORT, Link Analysis, Online Impact, Reporting Guidelines, Scientific Impact, SPIRIT, Webometrics
@article{Orduña-Malea2022,
title = {Evaluating the online impact of reporting guidelines for randomised trial reports and protocols: a cross-sectional web-based data analysis of CONSORT and SPIRIT initiatives},
author = {Enrique Orduña-Malea and Adolfo Alonso-Arroyo and José-Antonio Ontalba-Ruipérez and Ferrán Catalá-López},
doi = {10.1007/s11192-022-04542-z},
issn = {1588-2861},
year = {2023},
date = {2023-01-06},
urldate = {2023-01-06},
journal = {Scientometrics},
volume = {128},
number = {1},
pages = {407--440},
publisher = {Springer Science and Business Media LLC},
abstract = {Reporting guidelines are tools to help improve the transparency, completeness, and clarity of published articles in health research. Specifically, the CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statements provide evidence-based guidance on what to include in randomised trial articles and protocols to guarantee the efficacy of interventions. These guidelines are subsequently described and discussed in journal articles and used to produce checklists. Determining the online impact (i.e., number and type of links received) of these articles can provide insights into the dissemination of reporting guidelines in broader environments (web-at-large) than simply that of the scientific publications that cite them. To address the technical limitations of link analysis, here the Debug-Validate-Access-Find (DVAF) method is designed and implemented to measure different facets of the guidelines’ online impact. A total of 65 articles related to 38 reporting guidelines are taken as a baseline, providing 240,128 URL citations, which are then refined, analysed, and categorised using the DVAF method. A total of 15,582 links to journal articles related to the CONSORT and SPIRIT initiatives were identified. CONSORT 2010 and SPIRIT 2013 were the reporting guidelines that received most links (URL citations) from other online objects (5328 and 2190, respectively). Overall, the online impact obtained is scattered (URL citations are received by different article URL IDs, mainly from link-based DOIs), narrow (limited number of linking domain names, half of articles are linked from fewer than 29 domain names), concentrated (links come from just a few academic publishers, around 60% from publishers), non-reputed (84% of links come from dubious websites and fake domain names) and highly decayed (89% of linking domain names were not accessible at the time of the analysis). In light of these results, it is concluded that the online impact of these guidelines could be improved, and a set of recommendations are proposed to this end.},
keywords = {Altmetrics, Article-Level Metrics, Clinical Trials, CONSORT, Link Analysis, Online Impact, Reporting Guidelines, Scientific Impact, SPIRIT, Webometrics},
pubstate = {published},
tppubtype = {article}
}
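As a rough illustration of the link-analysis bookkeeping summarised in the abstract above (how scattered the URL citations are across linking domain names, and how many of those links have decayed), here is a minimal Python sketch. It is not the authors' DVAF implementation, and all URLs are hypothetical.

# Minimal sketch (not the DVAF method from the paper): group URL citations
# by linking domain name and estimate link decay by checking whether each
# linking page is still reachable. All URLs below are hypothetical.
from collections import Counter
from urllib.parse import urlparse

import requests

def linking_domains(url_citations):
    """Count URL citations per linking domain name (a rough proxy for how
    concentrated or scattered the online impact is)."""
    return Counter(urlparse(url).netloc.lower() for url in url_citations)

def decayed_share(url_citations, timeout=5.0):
    """Fraction of linking URLs that are no longer accessible (link decay)."""
    dead = 0
    for url in url_citations:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            if resp.status_code >= 400:
                dead += 1
        except requests.RequestException:
            dead += 1
    return dead / len(url_citations) if url_citations else 0.0

if __name__ == "__main__":
    # Hypothetical linking pages pointing to a guideline article.
    citations = [
        "https://examplepublisher.com/articles/consort-2010",
        "https://blog.example.org/reporting-guidelines",
        "https://examplepublisher.com/protocols/spirit-2013",
    ]
    print(linking_domains(citations).most_common())
    print(f"decayed: {decayed_share(citations):.0%}")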