Crossref’s Similarity Check service is used by our members to detect text overlap with previously published scholarly and professional works that may indicate plagiarism. Manuscripts can be checked against millions of publications from other participating Crossref members, as well as general web content, using the iThenticate text comparison software from Turnitin.
It’s me! Back in January I wrote, “The one constant in Crossref’s 20 years has been change.” This continues to be true, and the latest change is that I’m happy to say that I will be staying on as Executive Director of Crossref. At the recent Crossref board meeting, I rescinded my resignation and the board happily accepted this.
Please help us welcome new faces at Crossref! Martyn, Sara, Laura, and Mark joined us very recently, and we are happy they’re with us. Both Martyn and Sara have joined the Product team, which has given us the chance to reorganize the team into the following groups: content registration, scholarly stewardship, scholarly impact, metadata retrieval, and UX/UI leadership. Laura joined the Finance and Operations team to help make the billing process simple for our members. Mark joins the Technology team, and one of his projects will be improving the Event Data service.
It is exciting to already see the impact of their contributions, and we look forward to what’s to come!
2020 hasn’t been quite what any of us had imagined. The pandemic has meant big adjustments in terms of working; challenges for parents balancing childcare and professional lives; anxieties and tensions we never had before; and the strain of potentially being away from co-workers, friends, and family for a prolonged period of time. Many have suffered job losses, and around the world many have sadly lost their lives to the virus.
The DOI error report is used for making sure your DOI links go where they’re supposed to. When a user clicks on a DOI that has not been registered, they are sent to a form that collects the DOI, the user’s email address, and any comments the user wants to share.
We compile the DOI error report daily using those reports and comments, and email it to the technical contact at the member responsible for the DOI prefix as a .csv attachment. If you would like the DOI error report to be sent to a different person, please contact us.
The DOI error report .csv file contains (where provided by the user):
DOI - the DOI being reported
URL - the referring URL
REPORTED-DATE - date the DOI was initially reported
USER-EMAIL - email of the user reporting the error
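The report is a plain CSV with the four columns listed above, so it can be processed with standard tooling. A minimal sketch in Python, assuming the header spellings match the field names described here (confirm them against your own file, since some fields may be empty where the user didn’t provide them):

```python
import csv
import io

def parse_error_report(csv_text):
    """Parse a DOI error report CSV into a list of dicts, one per reported DOI."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return list(reader)

# Example with a made-up row (the DOI, URL, and email are placeholders):
sample = (
    "DOI,URL,REPORTED-DATE,USER-EMAIL\n"
    "10.5555/12345678,https://example.com/article,2020-11-01,user@example.com\n"
)
rows = parse_error_report(sample)
```

Each row then carries the reported DOI alongside the referring URL and date, which makes it easy to sort or deduplicate before investigating.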
We find that approximately two-thirds of reported errors are ‘real’ problems. Common reasons why you might get this report include:
you’ve published/distributed a DOI but haven’t registered it
the DOI you published doesn’t match the registered DOI
a link was formatted incorrectly (a stray period at the end of a DOI, for example)
a user has made a mistake (confusing 1 for l or 0 for O, or cut-and-paste errors)
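Some of the last two categories can be caught before you investigate by hand. A sketch of a few cleanup heuristics, based on the common mistakes listed above (the rules are illustrative assumptions, not an official normalization routine):

```python
import re

def normalize_reported_doi(doi):
    """Strip common user-introduced artifacts from a reported DOI.

    Heuristics only: surrounding whitespace, a pasted resolver URL,
    and trailing sentence punctuation are frequent cut-and-paste errors.
    """
    doi = doi.strip()
    # Drop a resolver prefix if the user pasted a full URL.
    doi = re.sub(r"^https?://(dx\.)?doi\.org/", "", doi)
    # Trailing punctuation, e.g. a period from the end of a sentence.
    return doi.rstrip(".,;)")
```

If the normalized form matches a DOI you have registered, the report was likely a formatting problem at the linking site rather than a missing registration.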
What should I do with my DOI error report?
Review the .csv file attached to your emailed report, and make sure that no legitimate DOIs are listed. Any legitimate DOIs found in this report should be registered immediately. When a DOI reported via the form is registered, we’ll send out an alert to the reporting user (if they’ve shared their email address with us).
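One way to triage a long report is to check each reported DOI against the Crossref REST API, which returns 404 for unregistered DOIs. A minimal sketch using only the standard library (the `mailto:` address in the User-Agent is a placeholder you should replace with your own contact, per Crossref API etiquette):

```python
from urllib.parse import quote
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def works_api_url(doi):
    """Build the Crossref REST API URL for a DOI, path-escaping the DOI."""
    return "https://api.crossref.org/works/" + quote(doi, safe="")

def is_registered(doi):
    """Return True if the DOI is registered with Crossref, False on a 404.

    Other HTTP errors (rate limiting, server errors) are re-raised so
    they aren't silently misread as 'not registered'.
    """
    req = Request(
        works_api_url(doi),
        headers={"User-Agent": "doi-report-check (mailto:you@example.org)"},
    )
    try:
        with urlopen(req):
            return True
    except HTTPError as err:
        if err.code == 404:
            return False
        raise
```

DOIs that come back as registered can be set aside; the remainder are the candidates for immediate registration or for the linking-error investigation described below.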
I keep getting DOI error reports for DOIs that I have not published, what do I do about this?
It’s possible that someone is trying to link to your content with the wrong DOI. If you do a web search for the reported DOI you may find the source of your problem - we often find incorrect linking from user-provided content like Wikipedia, or from DOIs inadvertently distributed by members to PubMed. If it’s still a mystery, please contact us.