IN SILICO
ANALYSIS TECHNIQUES (BIOINFORMATICS) - LITERATURE RESEARCH
Reading a newspaper on a personal digital assistant (PDA) rather than as traditional newsprint reduces the amount of CO2 pumped into the atmosphere by a factor of between 32 and 140, and cuts emissions of oxides of nitrogen and sulphur by several orders of magnitude. And business delegates can reduce all 3 emissions by up to 1,000-fold if they teleconference instead of travelling to overseas meetingsref.
the standard format of an original research article consists of:
abstract
introduction
materials and methods
results
discussion
acknowledgements : who is Olivier Danvy? He is the most thanked person in computer science, according to an automated analysis of 335,000 acknowledgements within the CiteSeer archive of computer-science papers. The technique, which relies on text-mining software, offers a new way to investigate the influence of individuals, research agencies and companies within different areas of science. Until now, research on acknowledgements has been hampered by the lack of a data repository to study. In contrast, citation statistics (which track which papers are referenced by others) have been manually compiled over the past 4 decades by the company Thomson ISI in Philadelphia. Acknowledgement analysis could allow this type of inspirational or motivational person to be more routinely appreciated. Experts are divided over the significance of the new approach. Acknowledgements are more slippery than citations: there are many reasons to acknowledge somebody, different countries and fields have different traditions of whom they acknowledge, and there is no requirement to mention everyone who deserves recognition. Peer review requires you to incorporate the latest and most relevant citations; however, no one checks whether all contributors are mentioned in the acknowledgements. Social scientists have already tackled some of these problems. They have divided acknowledgements into 6 main types, including support from funders and the, perhaps more tantalizing, Danvy-type 'conceptual' support of individuals. This type of categorization is essential in any analysis of acknowledgements; otherwise you'll get a mishmash. Such an analysis could also identify connections between groups of collaborators who might otherwise appear isolated.
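A minimal sketch of the kind of text mining such an analysis relies on, assuming acknowledgement sections have already been isolated. The cue phrases and regular expression here are illustrative assumptions; a production system such as the CiteSeer study would use proper named-entity recognition rather than a capitalization heuristic:

```python
import re
from collections import Counter

# Illustrative cue phrases; the name pattern grabs 1-3 capitalized words.
CUE_PATTERN = re.compile(
    r"(?i:thank|thanks to|grateful to|indebted to)\s+((?:[A-Z][a-z]+\s?){1,3})"
)

def extract_acknowledged(ack_text):
    """Pull capitalized name runs that follow common gratitude cues."""
    return [name.strip() for name in CUE_PATTERN.findall(ack_text)]

def rank_acknowledged(ack_sections):
    """Count how often each person is acknowledged across many papers."""
    counts = Counter()
    for text in ack_sections:
        counts.update(extract_acknowledged(text))
    return counts.most_common()

sections = [
    "We thank Olivier Danvy for insightful comments.",
    "The authors are grateful to Olivier Danvy and to the referees.",
]
print(rank_acknowledged(sections))  # → [('Olivier Danvy', 2)]
```

Categorizing each hit (funding vs. conceptual support, as the social scientists suggest) would then be a classification step on the sentence surrounding each match.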
salami slicing : a redundant publication is one which duplicates previous, simultaneous, or future publications by the same author or group or, alternatively, could have been combined with the latter into one paper. As there is no information about the extent of this problem in the surgical literature, we set out to assess the incidence, spectrum, and salient characteristics of redundant publications in 3 leading surgical journals: almost 1 in every 6 original articles published in leading surgical journals represents some form of redundancyref. A 2003 survey of ophthalmology journals estimated that at least 1.5% of all papers are duplicatesref. Some researchers seem to have perfected the art: a study released last month identified two papers that had each been published 5 timesref!
While most scientists agree on what constitutes scientific chicanery and malfeasanceref, and an Office of Research Integrity (ORI) has existed since 1992, plagiarism remains a serious threat to scientific integrity, with no tangible remedies on offerref. Despite this, US institutions are permitted to conduct inquiries under a veil of secrecy (cleverly disguised as privacy), and leniency in interpretation of the guidelines ensures that inquiries can be handled without compulsory disclosure to ORIref. University compliance documents are replete with inconsistencies, misinformation, and a tendency to obscure the true scope of the misconduct problem, making independent assessments difficult. Allegations of misconduct are on the rise according to ORI records. PubMed keyword searching on 'scientific misconduct' returns an average of 163 citations per year since 1992 (when ORI was set up) and 64 per year in the decade prior to 1992. Before 1992, an average of seven articles on scientific plagiarism were published annually, rising to 24 articles per annum since 1992. In fact, the number of articles in both categories started to rise in 1990-1991, so the increase is not directly attributable to the setting up of ORI. Further compounding the problem is the ineffectiveness of guidelines in clear-cut instances of misconduct. Universities can apparently comply with ORI guidelines yet fail to reprimand professors who steal ideas from junior researchers without apportioning credit. This is effectively the sanctioning of plagiarism, and measures to prevent it will be ineffective for as long as the authority to enforce them is vested in the offending institution. Protection for junior researchers is woefully inadequate, despite much-needed reform being tabled. Unfavorable outcomes coupled with a fear of retaliation dissuade most junior whistleblowers from ever making allegations, and legal options are limited for resource-restricted students contemplating litigation against attorney-laden universities.
Fair use (The Visual Artists Rights Act of 1990 amended section 107 by adding the reference to section 106A. Pub. L. No. 101-650, 104 Stat. 5089, 5132. In 1992, section 107 was also amended to add the last sentence. Pub. L. No. 102-492, 106 Stat. 3145.)
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include —
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work.
The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.
Reproduction by libraries and archives (The Copyright Amendments Act of 1992 amended section 108 by repealing subsection (i) in its entirety. Pub. L. No. 102-307, 106 Stat. 264, 272. In 1998, the Sonny Bono Copyright Term Extension Act amended section 108 by redesignating subsection (h) as (i) and adding a new subsection (h). Pub. L. No. 105-298, 112 Stat. 2827, 2829. Also in 1998, the Digital Millennium Copyright Act amended section 108 by making changes in subsections (a), (b) and (c). Pub. L. No. 105-304, 112 Stat. 2860, 2889)
(a) Except as otherwise provided in this title and notwithstanding the provisions of section 106, it is not an infringement of copyright for a library or archives, or any of its employees acting within the scope of their employment, to reproduce no more than one copy or phonorecord of a work, except as provided in subsections (b) and (c), or to distribute such copy or phonorecord, under the conditions specified by this section, if —
(1) the reproduction or distribution is made without any purpose of direct or indirect commercial advantage;
(2) the collections of the library or archives are (i) open to the public, or (ii) available not only to researchers affiliated with the library or archives or with the institution of which it is a part, but also to other persons doing research in a specialized field; and
(3) the reproduction or distribution of the work includes a notice of copyright that appears on the copy or phonorecord that is reproduced under the provisions of this section, or includes a legend stating that the work may be protected by copyright if no such notice can be found on the copy or phonorecord that is reproduced under the provisions of this section.
(b) The rights of reproduction and distribution under this section apply to three copies or phonorecords of an unpublished work duplicated solely for purposes of preservation and security or for deposit for research use in another library or archives of the type described by clause (2) of subsection (a), if —
(1) the copy or phonorecord reproduced is currently in the collections of the library or archives; and
(2) any such copy or phonorecord that is reproduced in digital format is not otherwise distributed in that format and is not made available to the public in that format outside the premises of the library or archives.
(c) The right of reproduction under this section applies to three copies or phonorecords of a published work duplicated solely for the purpose of replacement of a copy or phonorecord that is damaged, deteriorating, lost, or stolen, or if the existing format in which the work is stored has become obsolete, if —
(1) the library or archives has, after a reasonable effort, determined that an unused replacement cannot be obtained at a fair price; and
(2) any such copy or phonorecord that is reproduced in digital format is not made available to the public in that format outside the premises of the library or archives in lawful possession of such copy.
For purposes of this subsection, a format shall be considered obsolete if the machine or device necessary to render perceptible a work stored in that format is no longer manufactured or is no longer reasonably available in the commercial marketplace.
(d) The rights of reproduction and distribution under this section apply to a copy, made from the collection of a library or archives where the user makes his or her request or from that of another library or archives, of no more than one article or other contribution to a copyrighted collection or periodical issue, or to a copy or phonorecord of a small part of any other copyrighted work, if —
(1) the copy or phonorecord becomes the property of the user, and the library or archives has had no notice that the copy or phonorecord would be used for any purpose other than private study, scholarship, or research; and
(2) the library or archives displays prominently, at the place where orders are accepted, and includes on its order form, a warning of copyright in accordance with requirements that the Register of Copyrights shall prescribe by regulation.
(e) The rights of reproduction and distribution under this section apply to the entire work, or to a substantial part of it, made from the collection of a library or archives where the user makes his or her request or from that of another library or archives, if the library or archives has first determined, on the basis of a reasonable investigation, that a copy or phonorecord of the copyrighted work cannot be obtained at a fair price, if —
(1) the copy or phonorecord becomes the property of the user, and the library or archives has had no notice that the copy or phonorecord would be used for any purpose other than private study, scholarship, or research; and
(2) the library or archives displays prominently, at the place where orders are accepted, and includes on its order form, a warning of copyright in accordance with requirements that the Register of Copyrights shall prescribe by regulation.
(f) Nothing in this section —
(1) shall be construed to impose liability for copyright infringement upon a library or archives or its employees for the unsupervised use of reproducing equipment located on its premises: Provided, That such equipment displays a notice that the making of a copy may be subject to the copyright law;
(2) excuses a person who uses such reproducing equipment or who requests a copy or phonorecord under subsection (d) from liability for copyright infringement for any such act, or for any later use of such copy or phonorecord, if it exceeds fair use as provided by section 107;
(3) shall be construed to limit the reproduction and distribution by lending of a limited number of copies and excerpts by a library or archives of an audiovisual news program, subject to clauses (1), (2), and (3) of subsection (a); or
(4) in any way affects the right of fair use as provided by section 107, or any contractual obligations assumed at any time by the library or archives when it obtained a copy or phonorecord of a work in its collections.
(g) The rights of reproduction and distribution under this section extend to the isolated and unrelated reproduction or distribution of a single copy or phonorecord of the same material on separate occasions, but do not extend to cases where the library or archives, or its employee —
(1) is aware or has substantial reason to believe that it is engaging in the related or concerted reproduction or distribution of multiple copies or phonorecords of the same material, whether made on one occasion or over a period of time, and whether intended for aggregate use by one or more individuals or for separate use by the individual members of a group; or
(2) engages in the systematic reproduction or distribution of single or multiple copies or phonorecords of material described in subsection (d): Provided, That nothing in this clause prevents a library or archives from participating in interlibrary arrangements that do not have, as their purpose or effect, that the library or archives receiving such copies or phonorecords for distribution does so in such aggregate quantities as to substitute for a subscription to or purchase of such work.
(h)
(1) For purposes of this section, during the last 20 years of any term of copyright of a published work, a library or archives, including a nonprofit educational institution that functions as such, may reproduce, distribute, display, or perform in facsimile or digital form a copy or phonorecord of such work, or portions thereof, for purposes of preservation, scholarship, or research, if such library or archives has first determined, on the basis of a reasonable investigation, that none of the conditions set forth in subparagraphs (A), (B), and (C) of paragraph (2) apply.
(2) No reproduction, distribution, display, or performance is authorized under this subsection if —
(A) the work is subject to normal commercial exploitation;
(B) a copy or phonorecord of the work can be obtained at a reasonable price; or
(C) the copyright owner or its agent provides notice pursuant to regulations promulgated by the Register of Copyrights that either of the conditions set forth in subparagraphs (A) and (B) applies.
(3) The exemption provided in this subsection does not apply to any subsequent uses by users other than such library or archives.
(i) The rights of reproduction and distribution under this section do not apply to a musical work, a pictorial, graphic or sculptural work, or a motion picture or other audiovisual work other than an audiovisual work dealing with news, except that no such limitation shall apply with respect to rights granted by subsections (b) and (c), or with respect to pictorial or graphic works published as illustrations, diagrams, or similar adjuncts to works of which copies are reproduced or distributed in accordance with subsections (d) and (e).
simultaneous submission of articles to more than one journal : most medical journals do not allow it. The need for sequential submission is an important factor in delaying the publication of research. We propose that journals should allow authors to submit to two or more journals at the same time. This would lead to greater competition among journals and shorten publication delay, which would benefit both patients and authors. Timely publication of research findings is crucial because delays can have a harmful effect on patients' health. In a review of AIDS trials, Ioannidis found a delay of between 1.7 and 3 years between study completion and publication, with negative trials taking significantly longer to be publishedref. Furthermore, a study looking at economic evaluations found that on average the economic results were published two years after the clinical resultsref. Morally, as well as ethically, all those involved in the research process have a duty to report their findings as quickly as possible. An important barrier to early publication and dissemination is often the researcher: many researchers take too long to write up their findings. However, another big factor is sequential submission, whereby authors are allowed to submit to only one journal at a time. Authors intending to submit a manuscript that they consider to be of high quality and general appeal may consider a general medical journal (BMJ, JAMA, Lancet, New England Journal of Medicine, etc). These journals have a fairly rapid turnaround. Even so, unless the journal considers the paper for its "rapid" publication section, a decision usually takes six to eight weeks. If the decision is positive (usually subject to amendment), the study is then published within a few months of the final manuscript being received. However, many papers are not accepted by the first journal and resubmission to a second or third journal is required. Ironically, in our experience, the most interesting and methodologically sound papers are often delayed the most, as these are usually more likely to be sent to highly cited and competitive journals. On the other hand, many authors overestimate the value of their work and aim too high, and therefore contribute to the delay in publication. Nevertheless, it is not uncommon for a paper to be rejected 2 or 3 times before it is finally accepted.
Indeed, major general journals reject most of the papers they receive. Because our anecdotal experience was that such delays were widespread, we undertook a small survey of corresponding authors of randomised controlled trials. We searched Web of Science with the phrase "randomised controlled trial" for a single month (January 2004). We emailed 95 corresponding authors asking how many times they had submitted their manuscript before it was accepted. Of the 40 who replied, about half (18/40) had submitted the paper to two or more journals, and for a quarter of those the time to publication was 20 months or more (compared with about 12 months for those who published in their first journal). This delay will not be entirely the fault of journals: some of the authors will have been inefficient in resubmitting their manuscripts. Nevertheless, a large proportion will be due to the requirement of journals that papers are submitted to one journal at a time. Journals require sequential submission for several reasons, one of which may be to reduce competitive forces between journals. Altman described various methods journals have used to maintain an advantage in order to increase circulation and profitsref1, ref2. Sequential submission can be seen in this context. During the time an article is under submission to a journal it cannot be sent elsewhere for possible publication; in effect, the journal is holding a temporary monopoly on the paper. If the paper is rejected, the journal will suffer some loss in terms of the time and costs of peer and editorial review. Researchers and research consumers also lose from rejection. Delays in publication can adversely affect researchers' careers ("publish or perish") and their institutions' financial status, as distribution of public research funding in the United Kingdom is partly decided by a research assessment exercise. Research consumers (patients, doctors, and policy makers) lose out because the results of effective or ineffective treatments remain unknown.
One method suggested some years ago that could address this problem is multiple submissionref. Multiple submission differs from duplicate publication in that if the same manuscript were sent to two different journals and was accepted by one, the submission would then be withdrawn from the other. Multiple submission has the potential benefit of reducing the delay incurred by sequential submission. For example, an author might submit a paper to a general journal and a specialist publication. If the general journal accepted the paper for publication, it could be withdrawn from the specialist journal. To avoid encouraging duplicate publication, journals adopting a multiple submission policy could insist that authors inform them of the other journals that the article has been sent to. Acceptance or rejection letters could be sent not only to the authors but also to the other journals. This would prevent duplicate publication and also stop authors from waiting until they received a better offer from another journal. As well as speeding up publication of important findings, such a system would also lead to competition between journals for the best articles. Awareness that competing journals were also looking at the paper would provide a strong incentive to peer review the manuscript rapidly and make a final decision. This effect could increase the speed of peer review and publication. Indeed, early entrants to such a competition would, in our view, benefit from receiving higher quality submissions. For example, if the Lancet and BMJ were the only general journals to allow multiple submission, many authors would be tempted to prioritise those two journals before other general journals.
Although journals could compete in such a system, they might also collaborate. For example, the BMJ might collaborate with the British Journal of Obstetrics and Gynaecology. Authors with a manuscript on obstetrics that could appeal to a general medical readership would submit the paper to both journals simultaneously. If the BMJ thought that the paper was not of sufficient interest to a general audience, then no time would be lost in the paper being considered by a prestigious specialist journal. Multiple or simultaneous submission could introduce valuable competitive forces among journals for the best manuscripts.
Multiple submission is already allowed in some specialties. Piron compared his experience of writing and submitting papers to economics, finance, maths, and psychology journals, which do not allow multiple submissions, with law review journals, which doref. He noted that law review journals in the United States had the "fastest turnaround times of any set of journals on the planet." Journals may have reasons other than preventing competition for not allowing multiple submission. Multiple submission would increase the workload of journal staff through the increased flow of manuscripts. For some journals, the extra administrative burden would not be worthwhile, as it may slow down their decision-making processes and allow a competitor to "scoop" the article. This would leave them with the sunk costs of mailing the paper to reviewers etc, without having had the chance of publishing the paper. Workload would also increase for researchers, as they would be asked to review more papers. This is a burden some of us would bear to increase turnaround. Indeed, as a condition of allowing multiple submission, journals could require that one or more of the authors of the submitted paper agree to review one of the journal's other recent submissions. Workload for both journals and reviewers would fall, however, if a collaborative model of multiple submission were adopted. In this model, a journal would allow multiple submission on the condition that the paper went to a partner journal. Both journals could then share the reviewers' reports and one journal's staff could handle the administration. Some journals use an author-pays system (for example, BioMed Central journals). It is feasible that journals could have a single (free) submission policy or a multiple (pay) approach. This would allow the journal to offset some of its increased costs from losing an article to a rival, but it would also depress the demand for the service. However, this approach might be less than ideal given that some organisations can better afford to pay than others. Additionally, authors may be more likely to pay if they have positive findings than negative results. Allowing papers with negative findings to be submitted at no charge might offset this problem. Several models of multiple submission exist. Journals could adopt any of these, and they might even experiment with different models using a randomised trial.
eTBLAST : a web server to identify expert reviewers, appropriate journals and similar publicationsref1, ref2
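Services like eTBLAST rank publications by textual similarity to a query abstract; eTBLAST itself uses a more elaborate weighted-keyword and sentence-alignment pipeline, but the core idea can be sketched with a plain bag-of-words cosine measure (all function names and the threshold here are illustrative, not eTBLAST's actual algorithm):

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a real system would also remove stopwords."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_similar(query, corpus, threshold=0.8):
    """Return (document, score) pairs whose similarity exceeds the threshold."""
    scored = ((doc, cosine(query, doc)) for doc in corpus)
    return [(doc, s) for doc, s in scored if s >= threshold]
```

The same machinery, run pairwise over a journal's archive, is one way to screen for the redundant "salami-sliced" publications discussed above.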
International Committee of Medical Journal Editors (ICMJE) : journal editors will require companies to register a clinical trial in a public database before it starts if they hope to publish its final results. Trial leaders will need to describe the condition the trial aims to treat, which drug will be tested, and how the trial will measure the success or failure of the drug. The policy is intended to solve a problem that has long plagued medical research: if a drug fails in a clinical trial, a company gains little from publishing that result, so the published literature as a whole includes many articles about a drug's successes but few about its failures. As a result, doctors get an unbalanced picture of the true effectiveness of drugs. A study in 2003 looking at 500 cancer clinical trials found that 26% had not been fully published 5 years after preliminary data were first presented at a leading cancer conference. If only positive results are published, this can distort the medical literature and leave doctors thinking a treatment is more effective than it actually is; this in turn can affect the validity and findings of subsequent reviews, treatment decisions and clinical practice guidelines. 12 international journals, including the New England Journal of Medicine, The Lancet, and the Journal of the American Medical Association, published the statement online.
The Journal of Interferon & Cytokine Research (a 1,900-subscriber publication with 225 manuscripts per year, 25 to 30% of which ask for the guarantee) started making the offer in 1997 out of frustration with the months-long wait for peer review: send in $125 with your paper, and you'll get your review in just 14 days, guaranteed, or your money back. Both peer reviewers receive $50 if they meet the 2-week deadline; the other $25 goes to pay for extra overnight shipping charges. If there are lots of competing research teams working in similar areas, those teams have special incentives to publish first: in some fields, the citation half-life can be as short as a year or two. Some authors may pay the $125 because it's not their money they're spending; it's their grant money. If people were paying it out of pocket, they might think a little differently. If you choose not to pay the fee, your manuscript will probably get reviewed within 30 days. That's relatively fast; most journals try to get the author a decision within about 3 months, and if revisions are necessary, most can decide within 6 months.
The editors in chief, deputy editors, managing editors, and editorial advisory boards who control scientific publication – collectively known as gatekeepers – exert a special influence on the orchestration of international research activity. The selection of journal gatekeepers is a self-organizing process that science has developed over the last three centuries. An invitation to serve as a gatekeeper is both a distinction and a reward. But the process has skewed gatekeeper demographics, as we found when we built and evaluated a database of international core journal gatekeepers in 2003. We were trying to figure out whether counts of such gatekeepers would be correlated with trends in counts of journal papers and citations. In our database, science journals were defined as "international" if their editorial boards included scientists from at least eight countries, regardless of the journal title used. The "international" label in the title of some journals may hide what is really only a national one. On the other hand, for example, the editorial board of the American Heart Journal includes not only US-based scientists but also others, mostly from ten European countries. The current database contains data for 240 core journals from 12 science fields, chosen by the Glänzel and Schubert classification systemref, and includes the top 20 ranked by ISI's journal impact factor in each of the fields. The total number of analyzed gatekeepers can be considered statistically significant when compared to indicators based on papers and/or citations. Results are for 2003 and include the number and percentage of gatekeepers for 10 countries, together with the number of papers published in 12 science fields in 2000 and the citations those papers received from 2000 to 2002. The top 10 countries account for about 86% of the gatekeepers. With few exceptions, the number of US gatekeepers dominates the world of science to an extent that is considerably higher than their share of publications and citations. The prevailing dominance of the USA in all fields is also clearly visible in the number of editors-in-chief of the investigated core journals in science overall and in the 12 science fields. The dominance of US gatekeepers, as demonstrated by our measurements, is not a conspiracy with hidden intentions, but a consequence of the self-organizing nature of science. Nothing needs to be done. However, it is an important reflection of the self-organizing mechanism, which has allowed US gatekeepers to have a decisive influence on what, when and where worldwide research is published.
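The study's bookkeeping can be sketched as follows. The data model (editorial boards as lists of editor/country pairs) and both function names are illustrative assumptions, not the authors' actual database schema; only the eight-country threshold comes from the text above:

```python
from collections import Counter

def is_international(board):
    """A journal counts as 'international' when its editorial board
    spans at least eight countries (the study's definition)."""
    return len({country for _, country in board}) >= 8

def gatekeepers_by_country(journals):
    """Tally gatekeepers per country across all international journals;
    each journal is a list of (editor_name, country) pairs."""
    counts = Counter()
    for board in journals:
        if is_international(board):
            counts.update(country for _, country in board)
    return counts
```

Dividing each country's tally by the total then gives the percentage shares that the study compares against paper and citation counts.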
frauds : faked data / data fabrication / fabricated data-sets (making up data values) and falsification (changing data values).
John Darsee : in 1983, an investigation at Emory University, where Darsee worked before being appointed a fellow at Harvard Medical School's Cardiac Research Laboratory, revealed fabrication of data, unauthorized use of co-authors' names, and fictitious collaborators. The Emory investigation was undertaken after Harvard notified Emory of its suspicion of Darsee's Harvard research. Emory's investigative report focused only on the extent of Darsee's data fabrication. A copy of the report was sent immediately to the NIH, but the university was reluctant to send it to medical licensing boards or to the hospital where Darsee is currently employedref. Rather than prescribe action for Emory alone, the panel recommended seven procedural steps that should be taken by all NIH-supported research centers to guard against similar occurrences. These recommendations are being reviewed by NIHref. The NIH recommended that Darsee be denied federal research funds for the next 10 yearsref.
Luk van Parijs, 35, a rising star at the Massachusetts Institute of Technology (MIT) in the hot field of RNA interference (RNAi), was dismissed last week after admitting that he had fabricated and falsified data in grant applications, submitted manuscripts, and one published paper, the university reported in a statement. The California Institute of Technology (Caltech) in Pasadena has now begun reviewing two papers published by the researcher when he was a postdoc there. Harvard Medical School and Brigham and Women's Hospital, where Van Parijs was a graduate student, are also scrutinizing his early workref.
Kazunari Taira, a 53-year-old professor of biochemical engineering at the University of Tokyo, published faked data in an overseas scientific journal in February 2003, stating that his research team had succeeded in having E. coli bacteria produce a human enzyme called Dicer -- so called because it dices RNA -- by inserting a Dicer gene into a plasmid. The fabrication was discovered in Jan 2006ref.
Jon Sudbo, a Norwegian doctor specialised in mouth cancer at an Oslo hospital, fabricated the database of a study describing the association between mouth cancer and the use of anti-inflammatory drugs, published in The Lancet in October 2005. Sudbo published 2 other articles, in 2001 and 2004, in the New England Journal of Medicine. The US National Cancer Institute gave him a grant of 70 million Norwegian krona (10.5 million dollars, 8.7 million euros) for his research. The fabrication was discovered in Jan 2006ref.
Statistical methods for the detection of data fabrication in clinical
trialsref1,
ref2,
ref3
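One screen often described in this literature is terminal-digit preference: the final digits of genuine measurements tend to be close to uniformly distributed, whereas invented numbers over-use "round" digits such as 0 and 5. A minimal sketch of such a chi-square screen, assuming integer-valued measurements (the function names and the 0.05 critical value for 9 degrees of freedom are illustrative, not taken from the cited papers):

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of terminal digits (df = 9)."""
    digits = [abs(int(v)) % 10 for v in values]
    expected = len(digits) / 10  # uniform expectation per digit
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

CRITICAL_9DF_05 = 16.92  # chi-square critical value, df = 9, alpha = 0.05

def looks_fabricated(values):
    """Flag a data set whose terminal digits deviate strongly from uniform."""
    return terminal_digit_chi2(values) > CRITICAL_9DF_05
```

A flag from such a screen is a red flag for auditors, not proof of fraud: some genuine instruments also round their output.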
many journals contribute to the prevalence of bad science, because,
when the fundamental observation that led to the original publication
cannot
be reproduced, it is nearly impossible to publish a paper documenting
this.
Hence, controversies persist in the literature over many years, simply
because the corrected story either is never published, or is not
published
as prominently as the initial paper. True, there is an extensive
specialist
literature where ambiguous or conflicting results can be addressed in
detail,
but the readership is limited. Some journals, such as Nature, have
mechanisms
for publishing technical comments on published research (Brief
Communications
Arising: online only), but these are few in number and
must adhere to strict criteria. Reviewers of contradictory results
often
ask that the authors explain how the original authors could have
obtained
their results. To quote a recent rejection letter, "an adequate
explanation
for the apparent contradictory findings is not provided". Certainly,
speculative
explanations can be offered for some kinds of experimental differences.
But it is never possible to prove how another lab obtained data that
cannot
be reproduced. One can only be certain of one's own data. This demand
for
explanation creates serious problems in the case of scientific fraud.
In
a minor case, the original authors may have fudged one small set of
data
to 'prove' their theory. In a more serious case, fundamental
observations
cannot be reproduced. Whether this irreproducibility is due to outright
fraud, scientific incompetence or some combination cannot be determined
by the authors who try to reproduce the result and fail. Another
often-made
request of reviewers is that the original experiments be reproduced
exactly.
This sounds reasonable but, in fact, can become an absurd burden. Even
if the methods section were complete and accurate, one can never say
with
certainty that one has reproduced the experimental conditions
precisely.
Instead, the appropriate approach is to design experiments to test the
conclusions of the original paper. If these conclusions are disproved,
then the details of how they were arrived at are not relevant. Of
course,
a contradictory paper should be held to a higher standard than was the
paper it refutes. But all journals must endeavour to correct errors, or
those who perpetrate scientific misconduct (not necessarily outright
fraud)
will be rewarded, and those who try to correct wrong hypotheses in the
proper Hegelian manner — thesis, antithesis, synthesis — will be
punishedref.
Research subject to data audit could include studies presenting
possible
risks to public health, or those questioned by a whistleblower or by
peer
review. Others could be subject to random audits. Up to 1% of all
studies
could be audited every 3-5 years, at < 1% of the cost of the
original
study (Accountability Res. 1, 77–83, 1989). Auditing could be done by
an
independent body that would certify the validity of published results.
Sponsoring institutions could choose to publish a transparent analysis
of selected papers on the web. Although these processes might not
eliminate
all fraud or misconduct, they could substantially reduce such unethical
practicesref1,
ref2.
Image manipulation : the Journal
of Cell Biology (JCB) examines all digital images in all accepted
manuscripts
for evidence of manipulation. For black and white images this involves
simple adjustments of brightness and contrast in Photoshop (see figure,
part A). For color images, they use the "Levels" adjustment in
Photoshop
to compress the tonal range and visualize dim pixels (see figure part
B).
They have created standards for acceptable manipulationref,
and
they perform an initial investigation with the authors if they believe
those standards may have been violated. To do so, they request that the
authors submit the original data to the journal for comparison to the prepared
figure(s) in question. Editors of some high-profile biomedical journals
have recently voiced the opinion that journals cannot be investigative
bodiesref
(L.K. Altman, W.J. Broad, "Global trend: More science, more fraud," The
New York Times, Dec. 20, 2005). Clearly, journal editors cannot get
access
to authors' lab notebooks, but they certainly have a right to examine
the
raw data corresponding to any information presented in a manuscript,
especially
when it is possible to determine empirically whether the raw data have
been manipulated. This adds another layer to the review process beyond
traditional peer review. During their 3.5 years of screening experience
at the JCB, they have had to revoke the acceptance of 1% of papers that
passed peer review because they detected fraudulent image manipulation
that affected interpretation of the data. They do not take such cases
lightly.
Four editors must agree on a determination of fraudulent manipulation:
the managing editor and three academic editors, including the
monitoring
editor, a senior editor, and the editor in chief. They do not consider
the element of intent; acceptance is revoked if any conclusion in a
paper
is called into question by the manipulation. 25% of all their accepted
manuscripts have at least one figure that must be remade because they
detect
"inappropriate" manipulation, that is, the manipulation does not affect
the interpretation of the data, but it violates their guidelines for
presenting
image data. This indicates a widespread misunderstanding of the line
between
appropriate and inappropriate manipulation, which will have to be
addressed
during the training of students in the responsible conduct of research.
In almost all cases, they have been able to resolve incidents of image
manipulation themselves; only on very rare occasions have they needed
to
request the help of an institutional investigative body. One of the
supplemental
figures that Hwang and colleagues published in the now infamous stem
cell
cloning paperref
contained manipulated images. The image in the figure, part B is from
the
third row of Supplemental Figure S1B in that paper. It purports to show
negative staining for a particular cell-surface marker in 4 different
cell
lines. A simple adjustment of tonal range clearly shows that the 2
middle
images are identical. The minor differences in pixel structure are due
to image compression. It is likely that they would have identified this
duplication with their routine screen. It is important to note,
however,
that this would only have led them to request the original data from the
authors, who could have dishonestly claimed to have made a clerical
error
and provided different images. This illustrates a potential limitation
of their investigative capabilities, but editors will be surprised at
how
easily a deception can unravel if they start asking questions. What
about
other types of data besides image data? It is clearly more difficult to
determine whether numerical data have been misrepresented, fabricated,
or falsified. There are, however, clues to potential problems with
these
data that should raise red flags with peer reviewers and editors. These
include exceedingly small data sets, exceedingly narrow and/or uniform
error bars on a graph, or the presentation of data from a
"representative
experiment."
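The two screening steps described above — compressing the tonal range to expose dim pixels, and comparing panels while tolerating compression noise — can be sketched in a few lines. This is an illustrative toy, not the JCB's actual tooling; images are represented as nested lists of 8-bit grayscale values, and the level bounds and tolerance are assumed values:

```python
def stretch_levels(img, lo=0, hi=60):
    """Mimic a "Levels" adjustment: map the tonal range [lo, hi] onto the
    full [0, 255] scale, so dim pixels (e.g. the edges of a pasted panel)
    become clearly visible."""
    scale = 255.0 / (hi - lo)
    return [[min(255, max(0, round((p - lo) * scale))) for p in row]
            for row in img]

def panels_duplicated(a, b, tol=2):
    """Flag two same-size panels whose pixels agree within `tol` levels;
    the tolerance absorbs the minor pixel differences that lossy
    compression introduces between duplicated images."""
    return all(abs(pa - pb) <= tol
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))
```

In practice a screener would run the comparison over every pair of panels in a figure and inspect any near-identical pair by eye.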
A) For black and white images, brightness and contrast adjustments
can reveal background inconsistencies that are clues to manipulation.
In
the top panel, the intensity of the band in lane 4 has been adjusted.
In
the bottom panel, lane 5 is a duplicate of lane 2. Note the rectangular
boxes around the manipulated bands. B) For color images, tonal-range
adjustments
can reveal manipulations. The original shows what appears to be
negative
staining in four separate cell lines. In the adjusted image, the two
middle
panels are clearly duplicates. The minor differences in pixel structure
are due to image compression. One possible way to address problems with
numerical data would be to require authors to submit all their raw data
for comparison to the prepared figures. However, doing this comparison
would be much more time consuming than image screening. At a minimum,
journals
should set guidelines for presentation of numerical data that promote
accurate
graphing practices. JCB advocates that journal editors take a proactive
approach to detect potential misconduct and resolve it before
publication.
It is not enough to respond to allegations of misconduct made by
others,
and it is certainly not necessary to pass on all such suspicions to an
investigative body, as others have advocatedref.
Despite
the additional safeguards we have put in place, if someone is
determined
to publish fraudulent data, they may well succeed. It is important to
keep
in mind that our screening can pick up manipulations done to image
files
after they have been acquired, but it will not pick up adjustments to
the
data made during acquisition (e.g., using settings on a microscope).
Even
with these limitations, it is a cop-out for an editor to say that the
review
process can never be perfect, when the publication of fraudulent work
comes
to light (BBC News, "Cancer study patients made up," Jan. 16, 2006).
Editors
have a responsibility to do what they can to protect the published
record,
and they can now do something beyond peer review. To editors who argue
that they do not have the time or funds to do this kind of screening,
think
about the effort expended in dealing with a high-profile case of
fraudulent
research. The vast majority of biomedical journals are owned by
commercial
publishers, who make considerable profits from them. These companies
should
bear the cost of improving manuscript review as part of the
responsibility
of participating in scholarly publishing. Such enhanced review has the
potential to save considerable time and public funds wasted by
scientists
forced to debunk published fraudulent research, when they could have
been
used instead to make real progress.
Scientific and engineering publications between 1997 and 2001 :
1,265,000 from the USA : 4.3 papers per 1,000 inhabitants (population 293,800,204)
342,000 from the United Kingdom : 5.8 papers (35% more) per 1,000 inhabitants (population 59,050,800)
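The per-capita figures above follow from simple arithmetic; a one-line helper (hypothetical name) reproduces them:

```python
def papers_per_thousand(papers, population):
    """Publications per 1,000 inhabitants, rounded to one decimal place."""
    return round(papers / population * 1000, 1)

usa = papers_per_thousand(1_265_000, 293_800_204)  # 4.3
uk = papers_per_thousand(342_000, 59_050_800)      # 5.8
```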
Online periodical journals on general medicine (specialist journals
are listed under their respective sections; journals regarding
immunology are listed in Journals on basic immunology; see also the
Publishers section)
The scientific community has come to regard impact factors, calculated
by the Institute for Scientific Information (ISI), as providing a
quantitative
and largely objective guide to which journals publish the best
research.
Although many problems can result from naïve reliance on
journal
impact factors as a quality metric (especially when attempting to
compare
different fields)ref,
the
perception of many scientists is that, to get recognition and career
advancement, they must publish in a journal with a good impact factor.
This presents a major obstacle to publishers of new journals, since
even
the best journal won't have a proper impact factor for 3 years. But
this
situation is even worse than it sounds, as the clock only starts
ticking
when ISI starts "tracking" the journal. When does ISI start
tracking
the journal? It depends… According to ISI, the factors it uses to
decide
when to start tracking a journal include:
how many articles the journal publishes
how many competing journals ISI already tracks in the same discipline
the previous citation record of the journal's editorial board
the previous citation record of the authors who publish in the journal
the number of times the journal has been cited in journals that are
already
tracked by ISI
Unfortunately, despite good intentions, this selectivity on ISI's part
has the unintended consequence of concealing the success of new
journals.
A case in point is BMC Bioinformatics. Since it published its first
article
in 2000, this journal has rapidly established itself as one of the most
active and successful in its field :
However, since ISI only began tracking BMC Bioinformatics in 2002,
this will not translate into an official impact factor until June 2005,
when the 2004 impact factors are released. All is not lost, however. It
is possible to calculate an unofficial impact factor for any journal,
even
if it is not officially tracked by ISI, by making use of the
information
in ISI's cited reference database which includes the entire reference
list
of all tracked journal articles, and therefore includes citations of
journals
which are not themselves trackedref.
Using this method, the 2003 impact factor for BMC Bioinformatics can be
estimated as follows:
BMC Bioinformatics articles published in 2001-2 : 48 (one 'Correction' article ignored for citation analysis purposes)
2003 citations of these articles : 235 (according to the ISI Web of Science cited reference database)
Unofficial 2003 impact factor for BMC Bioinformatics : 235/48 = 4.896
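The calculation above generalises: a journal's impact factor for year Y is the number of year-Y citations to its articles from the two preceding years, divided by the number of those articles. A minimal sketch (function name assumed):

```python
def impact_factor(citations, articles):
    """Impact factor for year Y: citations received in year Y by articles
    published in years Y-1 and Y-2, divided by the count of those articles."""
    return round(citations / articles, 3)

# The unofficial BMC Bioinformatics figure derived in the text:
impact_factor(235, 48)  # 4.896
```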
This "unofficial" 2003 impact factor for BMC Bioinformatics already
compares
very favourably with that of more established journals.
A comparison of 2003 journal impact factors :
Genome Research : 9.635
Bioinformatics : 6.701
Nucleic Acids Research : 6.575
Molecular Biology and Evolution : 6.050
BMC Bioinformatics : 4.896 (estimated)
Journal of Computational Biology : 4.600
Genetics : 4.276
Molecular Ecology : 3.870
Evolution : 3.833
Protein Science : 3.787
Genomics : 3.488
Journal of Molecular Evolution : 3.114
Molecular Phylogenetics and Evolution : 2.826
Gene : 2.754
Genome : 1.861
The journals listed are a selection of the journals that most
frequently
publish and/or cite bioinformatics-related articles. Note that the
figure
listed for BMC Bioinformatics is not an official impact factor, but an
estimate, based on ISI's data, of what the impact factor would be, if
it
were calculated. This estimated impact factor places BMC
Bioinformatics
in the top 5% of all journals covered by ISI. Yet an author reviewing
the
2003 Journal Citation Report from ISI would have no idea that the
journal
BMC Bioinformatics was so highly cited. The number of articles on which
this calculation is based is relatively small. The official impact
factor,
which is expected to arrive in mid-2005, may be significantly higher or
lower, but it seems clear that it will be respectable. BMC
Bioinformatics
is not alone in facing this problem: there are many other recently
launched
journals, both from BioMed Central and from other publishers, whose
impressive
citation record is not currently captured by the impact factors listed
in ISI's Journal Citation Report. It is an unfortunate fact that this
may
needlessly dissuade many authors from publishing in these new journals,
and thus may serve to hold back innovation in science publishing. After
many years of having the field of citation analysis largely to itself,
ISI is finally facing the prospect of serious competition. The
increasing
use of standard XML formats by publishers means that citation analysis
is
no longer a daunting logistical challenge. It is simply a question of
number
crunching. Citation tracking data for Open Access content is already
available
through Citebase, and the
usefulness
of this free service will grow as the amount of Open Access content
increases.
Meanwhile, CrossRef (the full text linking service) is also now
collecting
article reference lists from publishers for 'forward linking' purposesref,
and
these could in future potentially also be used to calculate
impact-factor-like
metrics. In addition Elsevier is now working on Scopus, a bibliographic
database/citation analysis service that the publisher claims will offer
broader journal coverage than ISI. With luck, this competition may give
ISI just the spur it needs. BioMed Central's recommendation is that ISI
should reconsider its policy on citation tracking, and should introduce
a policy of immediately tracking any peer-reviewed journal that meets
basic
quality standards and which can provide reference list data in an
appropriate
form to allow automated analysis. By doing this, ISI would provide a
valuable
impartial service to the scientific community.
Emphasis on where research is published—relying on impact factors to
reward academic work with funding or promotion—is ripping the soul out
of academia. Publications become more important than teaching and the
actual
research itself. In South Asia, job promotion often depends
largely
on the number of research papers published, and some doctors go to
unreasonable
lengths to "persuade" editors to publish their work. The quality of
clinical
work or even of the research itself is less important than the length
of
a citation list on a curriculum vitae. China, too, offers promotion
according
to the number of research papers. There are other systems. Germany, for
example, has an intense hierarchy, where the chief specialist is one
notch
below God—or one notch above—with junior staff promoted on a whim or
shunted
to a dead end post in a flash of irritation. What value research or
academic
excellence in such an environment? Japan has managed to marry these
arbitrary
approaches. In the country's fierce hierarchy, promotion is aided by
applicants
listing journal impact factors beside references in their citation
list.
Candidates boast individual impact factors of, for example, over 100,
somewhere
in the 30s, or a miserable 0.3. Japan's fascination with genomics and
impact
factors is hindering advancement in academia for good clinicians with
little
basic science research experience. Although papers from high-impact-factor
journals were more likely to be cited in US evidence-based guidelines,
journals with low impact factors were also frequently cited as providing
important evidenceref.
Effect
on readers' knowledge or clinical practice remains unmeasured, they
conclude, and clinical and preventive research is undervalued. Impact
factors
have much to answer for, as do deans, sponsors, government agencies,
and
employment panels who use them as a convenient—but flawed—performance
measure.
How can a score count for so much when it is understood by so few and
its
value is so uncertain? In defence, worshippers of impact factors say we
have no better alternative. Isn't it time for the scientific community
to find one?ref Relative
citation
impact of various study designs in the health sciences
The Hirsch number (h)
is a newly proposed measure of scientific productivity that attempts to
go beyond simple publication-counting. The concept was invented by
Jorge
E. Hirsch of University of California San Diego in 2005. Hirsch
proposed
that an individual's Hirsch number is the largest integer h
such
that he or she has at least h publications with at least h
citations in the refereed literature. The Hirsch number is calculable
using
free Internet databases and serves as an alternative to more
traditional
impact factor metrics which are available for a fee. Because only the
most
highly cited articles contribute to the Hirsch number, its
determination
is a speedy process. Hirsch has demonstrated that h has high predictive
value for whether or not a scientist has won honors like National
Academy
membership or the Nobel Prize. In physics, a moderately productive
scientist should have an h equal to the number of years of service,
while biomedical scientists tend to have higher valuesref1,
ref2.
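Hirsch's definition translates directly into code: sort a scientist's papers by citation count and find the largest rank h at which the h-th paper still has at least h citations. A minimal sketch (function name assumed):

```python
def hirsch_index(citations):
    """h = largest integer h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # counts are sorted, so no later paper can
    return h

# Five papers with 10, 8, 5, 4 and 3 citations give h = 4.
hirsch_index([10, 8, 5, 4, 3])  # 4
```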
open-access (OA) journals (OAJ) : in
Britain,
an inquiry by the House of Commons Science and Technology Committee
concluded
in July 2004 that open access seems viable but requires further
experimentation.
The committee advised the government to require UK-based authors to
publish
articles on their institutions' websites, but the government largely
rejected
the advice on 8 November. In August, 25 Nobel laureates signed a letter
calling on the US government to provide public access to
government-funded
research. As a result, the US National Institutes of Health in
September
invited comments on a proposal to make the results of all federally
funded
research freely available shortly after publication. And in October,
the
German government gave €6.1 million to the Max
Planck Society to develop an open-access system for publications
from
the society's institutes. Meanwhile, 48 biomedical societies united to
oppose open access. On 16 March, they launched a series of Principles
for
Free Access to Science, which backs open access but reserves the right
of publishers to charge for several months after publication. Society
editors
argued that it would be impossible to generate income for other society
activities if they only raised money through payments from authors. The
impact factors of nearly 200
open-access
journals are similar to those of traditional journals in the same
fields,
according to a recent Thomson ISI
report. The 58 open-access medical journals that receive impact factors
fell, on average, at the 40th percentile of all medical
journals,
with all but 11 ranking higher than the 10th percentile. For
life sciences journals, the 37 open-access journals were ranked, on
average,
at the 39th percentile. It is clear that at an overall
macro-economic
level, a switch to Open Access publishing would not negatively impact
research
funding. The cost of the present system of biomedical research
publishing,
with all its inefficiencies and overly generous profit margins, still
only
amounts to about 1-2% of the overall funding for biomedical research
(estimate
from the Wellcome Trust, cited by Public Library of Science in their
submission
to the House of Commons inquiry). There is no reason why the cost of
Open
Access publishing should exceed the cost of the current system, since
the
fundamental process is the same. In fact, Open Access publishers are
leading
the way in using web technology to reduce costs further, so the cost of
Open Access publishing to the scientific community will be
significantly
less than the cost of the system that it replaces. Meanwhile, the
vastly
increased access to research that is delivered by Open Access will
greatly
increase the effectiveness of the research money that is spent, since
all
research builds on what has gone before it, and is needlessly
handicapped
if access to previous research is inconvenient, slow, or impossible. In
short, funders will get more "bang for their buck". At the
micro-economic
level, there will certainly be transitions that need to be carefully
managed
as the Open Access publishing model grows in economic significance.
For example,
since the total cost of publishing scientific articles is roughly
proportional
to the amount of research to be published, it may well make sense for
the
costs of publishing to be incorporated into research funding grants,
rather
than being covered by library budgets. These are important issues,
which
deserve attention. But these transitional challenges should not be
allowed
to obscure the overall picture which is that with the Open Access
publishing
model the scientific community will pay significantly less, yet receive
vastly more (in terms of access and usability). On 29th April 2004 the
Wellcome
Trust published a report on the economic implications of Open
Access
publishing. The report (Costs
and Business Models in Scientific Research Publishing) indicates
that
Open Access publishing could offer savings of up to 30%, compared to
traditional
publishing models, whilst also vastly increasing the accessibility of
research.
Elsevier's figure of 97% of researchers in the UK having access to
Elsevier
content is misleading. As explained in the small print of their written
submission, this refers to researchers at UK Higher Education
institutions
only, many of which have indeed taken out ScienceDirect subscriptions
as
a part of JISC's "big deal" agreement. However, these researchers do
not
have access to all ScienceDirect content by any means - the subset of
journals
that is accessible varies widely from institution to institution,
meaning
that access barriers are frequently a problem, even for researchers.
The
access situation at institutions which focus primarily on teaching
rather
than research is particularly bad, but Elsevier disguises this by
weighting
each institution according to the number of 'researchers' employed, to
come up with the 97% figure. More fundamentally, the Higher Education
sector
is only one of several sectors carrying out biomedical research in the
UK. Much medical research in the UK goes on within the NHS. Lack of
online
access to subscription-only research content within the NHS is a major
problem, as detailed in a separate
report. Similarly, Elsevier's figures conveniently omit researchers
employed at institutes funded by charities such as the Wellcome Trust
and
Cancer Research UK, and in industry. To say that being able to go to
the
library and request an interlibrary loan is a substitute for having
Open
Access to research articles online is rather like saying that carrier
pigeon
is a substitute for the Internet. Yes - both can convey information,
but
attempting to watch a live video stream with data delivered by carrier
pigeon would be a frustrating business.
Practically, the obstacles to obtaining an article via the interlibrary
loan route are so huge that all but the most determined members of the
public are put off. For those who persist, after a time lag that will
typically
be several weeks, their article may (if they are lucky) finally arrive
in the form of a photocopy. What the user can do with that photocopy is
extremely restricted compared to what they can do with an Open Access
article.
With an Open Access online article, you can cut and paste
information
from the article into an email. With a photocopy you cannot.
With an Open Access online article, the license agreement explicitly
allows
you to print out as many copies as you like and distribute them as you
see fit. But if you copy and distribute the article you received by
Interlibrary
Loan without seeking appropriate permission from the publisher, you may
well be in violation of copyright law. It is also worth noting that an
increasing fraction of public libraries now offer free or low-cost
Internet
access, making it even more convenient for the public to view Open
Access
research. There is already a vast amount of material on medical topics
available on the Internet, much of which is junk. Can it really be
beneficial
for society as a whole that patients should have access to all the
dubious
medical information on the web, but should be denied access to the
scientifically
sound, peer-reviewed research articles? In some cases, to be sure,
comprehending
a medical research study can be a demanding task, requiring additional
background reading. But patients suffering from diseases are
understandably
motivated to put in the effort to learn more about their conditions, as
the success of patient advocacy groups in the USA has shown. Patients
absolutely
should have the right to see the results of the medical research that
their
taxes have paid for. It is peculiar to hear large commercial publishers
saying that Open Access would be a very good thing for the
pharmaceutical
and other industries, and then claiming that this is a problem with the
Open Access model. The chemical, biotech and pharmaceutical industries
play a major role in the UK economy, and so this argues strongly for
Open
Access.
To say that they do not contribute significantly in terms of publishing
research is inaccurate. Industry publishes a significant amount of
research
itself, and also funds much research within the academic community that
then goes on to be published. It is certainly possible that under an
Open
Access model, institutions (and countries) that publish a lot of
research
would pay a somewhat higher proportion of the cost of publishing than
they
do currently. Since it is the process of publishing the research that
incurs
the lion's share of the costs (with Internet distribution being very
cheap
in comparison), this is the most logical, sustainable way to fund the
publication
process. In contrast, the current situation, in which small
universities
effectively subsidize the cost of publishing the research carried out
at
relatively wealthy research centres, is far more inequitable and
unsustainable.
But in any case, the absolute amount of money expended by the research
institutions will fall, due to the far greater efficiency of Open
Access
publishing. Furthermore, research institutions that support Open Access
will benefit greatly in terms of kudos and influence, due to the
greater
accessibility and visibility of their research. These institutions
would
therefore be cutting off their nose to spite their face to oppose Open
Access on the grounds given above. The assertion being made is,
essentially,
that Open Access publishers have an incentive to publish dubious
material,
in order to increase their revenue from Article Processing Charges.
This
is a very peculiar accusation for a traditional publisher to make given
that in the same evidence session, Elsevier's hefty annual subscription
price increases were justified as follows: "On pricing, we have put
our
prices up over the last five years by between 6.2 per cent and 7.5 per
cent a year, so between six and seven and a half per cent has been the
average price increase. During that period the number of new research
articles
we have published each year has increased by an average of three to
five
per cent a year. [...] Against those kinds of increases we think that
the
price rises of six to seven and a half per cent are justified." [Oral
evidence
to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)]
i.e. Elsevier's primary justification for increasing their subscription
charges (and profits) is that each year they are publishing more
articles.
In which case, if their own argument is to be believed, they face
exactly
the same conflict of interest as Open Access publishers. Fortunately,
however,
no such conflict of interest exists, for either Open Access or
traditional
publishers. Any scientific journal's success depends on authors
choosing
to submit their research to it for publication. Authors publish
research
in order for the value of their findings to be recognized. The kudos
granted
by a solid publication record is crucial for scientific career
progression.
Authors submit their research to journals with a reputation for
publishing
good science. If a journal had a reputation for publishing poor
science,
it would not receive submissions. Thus the system is inherently
self-correcting.
It should also be noted that many leading journals (both commercial and
not-for-profit) already have page charges and colour figure charges for
authors, in order to defray expenses and to keep subscription costs
down.
Just 2 examples (of many hundreds) are the Proceedings
of the National Academy of Sciences (USA), and Genes &
Development.
So author charges are hardly an unprecedented experiment. It is true
that
commercial publishers have tended in some cases to remove author
charges,
and to commensurately increase subscription fees, since this suits
their
commercial interests in maximizing profits. But it is clear that author
charges pose no fundamental problem to effective peer review. Health
InterNetwork Access to Research Initiative (HINARI), and its sister
initiative, Access to
Global
Online Research in Agriculture (AGORA), are commendable initiatives
and are undoubtedly warmly welcomed by researchers working in the
eligible
countries.
Via these schemes, publishers give some of the poorest countries free
access to some of their journals. In HINARI, twenty-eight publishers
participate,
making a total of more than 2000 journals available for free to some of
the poorest countries (defined as having a per capita annual income of
less than $1000); and at a deep discount for some slightly less
disadvantaged
countries (per capita annual income between $1000 and $3000).
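The income thresholds above amount to a simple three-tier classification, sketched below (the function name and tier labels are illustrative, not part of the schemes' official rules):

```python
def hinari_tier(gni_per_capita_usd):
    """Classify a country's HINARI/AGORA access tier by per capita annual income (USD)."""
    if gni_per_capita_usd < 1000:
        return "free access"    # poorest countries: participating journals free of charge
    elif gni_per_capita_usd <= 3000:
        return "deep discount"  # slightly less disadvantaged countries
    else:
        return "not eligible"   # expected to buy regular subscriptions

# A country at $735 per capita (the World Bank low-income cut-off cited in
# the text) would qualify for free access by income alone -- yet several such
# countries are excluded from the actual eligibility list.
print(hinari_tier(735))   # free access
print(hinari_tier(2500))  # deep discount
```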
Unfortunately
these schemes offer only a partial solution to the access problems of
the
developing world. The list of eligible countries has many notable
omissions.
It excludes large low-income countries such as India, Pakistan and
Indonesia,
even though these countries have per capita annual incomes of $735 or
less,
and are therefore "low-income" countries according to World Bank
criteria.
Countries such as Brazil and China (which are "lower-middle income"
according
to the World Bank) are also excluded from the eligibility list, even
for
discounts. There is an obvious explanation for these omissions. These
larger
countries have significant research programs, so publishers can
generate
substantial income by selling subscriptions to them. It appears that
traditional
publishers will only offer Open Access to the developing world when
they
can be sure it won't affect their profits. It is therefore clear that
researchers
in developing countries have a huge amount to gain from greatly
expanded
access to the global scientific literature that Open Access publishing
will offer. Certainly, there are challenges that need to be faced to
ensure
that authors in developing countries can publish in Open Access
journals,
but these challenges are by no means insurmountable. Indeed, many
low-income
countries have already started their
own Open Access journals. Meanwhile, BioMed Central currently
offers
a full waiver of the article processing charge to authors in low and
low-middle
income countries. Long term, the scientific community will certainly
find
ways to ensure that scientists in developing countries get the full
benefit
of Open Access, both as readers and as authors. Firstly, sending out
printed
copies of journals to subscribers who pay for them is in no way in
conflict
with the goals of Open Access. Many Open Access journals (such as PLoS
Biology, Journal of Biology and Genome Biology) have print editions.
Wherever
there is a demand for print (from libraries or from individuals) then
print
editions are available to those who wish to pay to receive them, just
as
with a traditional journal. But, far more importantly, by Elsevier's
own
estimate some 30 million people in the UK (and more than half a billion
people worldwide) use the Internet. The wonderful thing about Open
Access
is that any one of those hundreds of millions of people can print out
copies
of any Open Access article, and distribute them to whomever they want.
If you want to get hold of an Open Access article, there are literally
hundreds of millions of potential sources. We already see the power of
this
mechanism in action. In the poorest countries in Africa, those
scientists
who are lucky enough to have access to the Internet are downloading
Open
Access articles from BioMed Central's journals (e.g. Malaria Journal),
printing them out in large numbers, and distributing them to their
colleagues
in areas the Internet does not yet reach. They confirm to us that this
makes the research vastly more accessible than research published in
traditional
print-only journals. In contrast, many traditional journals are
received
in print by only a few hundred libraries worldwide. Not only that, the
libraries that hold these print copies are bound by strict rules
governing
what is and is not permissible in terms of copying and redistribution.
To argue that these few hundred printed copies provide greater access
to
research than making articles openly accessible online is, frankly,
ludicrous.
It has been claimed that a high quality journal such as Nature would need to charge authors
£10,000-£30,000
in order to move to an Open Access model. But even for Nature, the
figure
of £10,000-£30,000 is wildly off the mark. The calculation
used by Macmillan was as follows: "Very crudely, £30 million of
sales:
we get income of £30 million and we publish 1,000 papers a year.
That is your [£30,000]." [Oral evidence to Inquiry, March 1st
2004,
Richard Charkin (CEO, Macmillan)]. £30,000 is indeed a lot of
money.
But Nature clearly spends nothing like that on each research article
that
it publishes. There are several major problems with the calculation
that
was used: A significant fraction of Nature's £30m revenue is
spent
to commission and produce the non-research-article content of the
journal
(e.g. News & Views articles, book reviews, commentaries, editorials
etc.) This non-research content would continue to drive healthy print
and
online subscription revenue, even if the research articles were made
freely
accessible online. Since the non-research content (the front-matter) is
far more widely read than the research articles themselves, it is far
from
clear whether making the research articles Open Access would have any
negative
impact on subscription revenue. In fact, the opposite can be argued.
For
the same reason, there is no reason to believe that Nature's impressive
advertising revenue would suffer dramatically as a result of Open
Access,
yet it is assumed to fall to zero in Nature's calculation. Part of
the
argument used to justify the high cost per published article is that
Nature
rejects more than 90% of papers submitted, and so has to review more
than
10 papers for every one it publishes, and has to bear the entire cost
of
this. "[Nature] publishes fewer than 10% of the research articles
submitted.
Economics dictates that high quality journals like Nature have a high
unit
cost per paper published, because for every article published more than
ten have been reviewed and de-selected."
Letter to Inquiry, January 13th 2004, Richard Charkin (CEO, Macmillan)
This would indeed be expensive, and it is true that the repeated
peer-reviewing
of rejected papers as they trickle down the journal pyramid is one of
the
worst inefficiencies of the present system. In fact, however, Nature is
not that profligate, and has already taken steps to address this issue:
"If a paper cannot be accepted by Nature, the authors are welcome to
resubmit
to Nature Cell Biology. Nature will then release referees' comments to
the editors of Nature Cell Biology with the permission of the authors,
allowing a rapid editorial decision. In cases where the work was felt
to
be of high quality, papers can sometimes be accepted without further
review"
From the House of Commons (HoC) website: Thus, if a paper is scientifically sound, but is
not exceptional or fashionable enough to appear in Nature, it may well
be submitted and accepted into one of the next tier of journals in the
Nature stable (Nature Cell Biology, Nature Medicine, Nature Genetics
etc.)
without requiring significant additional editorial work or costs. This
is a very sensible system, and is one that is already in use at BioMed
Central. If an article is rejected for publication in BioMed Central's
top-tier journal, Journal of Biology, but is judged by the reviewers
and
editors to be scientifically sound, the authors may be offered
publication
in one of our more specialist journals. Public Library of Science plans
to operate a similar mechanism as it launches new journals. This
trickle-down
approach benefits authors by avoiding the delays caused by repeated
rounds
of peer-review, and benefits science as a whole by reducing the cost of
the publication process while maintaining quality. Taken together, the
above factors make it clear that the actual figure that would be
necessary
as an author charge for Nature would most likely be vastly lower than
the
suggested figure of £10,000-£30,000. It is even possible
that
Nature could operate at a profit while offering Open Access to research
content and making no author charge whatsoever. Elsevier cannot
realistically
claim to have led the transition of scientific publishers from print to
online – that was done by smaller, more nimble operators such as
HighWire
Press (which brought the Journal of Biological Chemistry online in 1995) and
BioMedNet (which made the Current Opinion series of journals available
online in full text form back in 1994). Of the large commercial
publishers,
Academic Press started IDEAL in 1995, years before ScienceDirect.
Similarly,
Elsevier's figure of £200 million for the development costs of
ScienceDirect
is more an indication of corporate inefficiency than of innovation.
Huge
investment by a large corporation is not the best driver of innovation,
especially in the modern connected world. The explosion of the Internet
has shown that open platforms are the real spur for innovation. The
open
standards of the Internet mean that anyone can create a website and
offer
any imaginable online service, and it will be instantly accessible by
all
Internet users world-wide. The result has been an unparalleled wealth
of
innovation, which goes far beyond what proprietary online services had
previously achieved. Open Access to the scientific literature holds the
promise of the same benefits for science. Once the majority of the
scientific
literature is Open Access, in the full sense of being openly
re-distributable
and re-usable, the entire scientific community will be free to develop
and improve techniques to mine and explore that literature. They will
not
be constrained by any one corporate budget or policy, nor by the
barriers
inherent in the current fragmentation of the literature. At this point
in time we can only imagine what is possible, but it is certain that it
will dwarf what any one company might achieve. Scientific integrity is
protected not by copyright law, but by the norms, standards and
processes
of the scientific community. An article is only "stolen" from an author
if it is mis-attributed. This is fraud, and laws other than copyright
deal
with fraud. It is exceptionally rare for a scientific publisher to use
copyright law to defend the integrity of a scientific paper on behalf
of
an author. In fact BioMed Central knows of no situation where this has
happened. The "scientific integrity" argument simply provides a
convenient
excuse, which is used by traditional publishers to attempt to justify
their
requirement for transfer of copyright. Meanwhile, the real reason for
copyright
transfer is clear. Publishers regularly use copyright law to protect
the
profits they derive by controlling access to the literature. For
example,
in ongoing litigation, Elsevier and Wiley are suing various US
photocopying
firms for, amongst other things, including copies of research articles
in student course-packs without paying royalties to the publisher.
The new National Institutes of Health (NIH) policy, which comes into
effect on 2 May 2005, requests that authors whose research was funded
by
the NIH submit copies of their papers to the agency's National Library
of Medicine (NLM) after they are accepted for publication. The papers
will
then be placed in an online archive. Authors can decide when the papers
are made available to the public, but the NIH would like this to happen
as soon as possible, and in any case within 12 months of publication.
Advocates
of full open access to scientific literature are unhappy that the
policy
relies on voluntary participation from authors, and that it does not
require
public access within 6 months of publication. It would put researchers
in the difficult position of having to negotiate between the NIH, which
wants researchers to make their work available as soon as possible, and
journals, which may want researchers to wait. Publishers and societies
that draw income from publishing have also criticized some aspects of
the
policy. They object to the NIH's plan to archive papers on its own
site,
instead of simply directing the public to journal websites, branding it
a waste of public money. NIH officials estimate that the archive will
cost
between $2 million and $4 million a year to run. Types of open access
to
scientific literature :
type : description : examples
home page : faculty research paper hosted on a personal or department home page
support of open-access journals and the development of publishing resources contributed by member institutions : German Academic Publishers Project
The subscription cost of commercial journals rose by 224% from 1988 to
1998, according to the Association of Research Libraries — although
nonprofit
publishers have also contributedref.
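For scale, a 224% rise over those ten years compounds to an average annual increase of roughly 12.5%, as a quick check shows:

```python
# Convert the 224% rise over 1988-1998 into a compound annual growth rate.
rise = 2.24                       # a 224% increase
factor = 1 + rise                 # prices were multiplied by 3.24 overall
years = 1998 - 1988
cagr = factor ** (1 / years) - 1  # compound annual growth rate
print(f"{cagr:.1%}")              # ~12.5% average annual increase
```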
One could also argue that the crisis is due in part to the failure of
institutions
to increase their acquisition budgets at a time of substantial growth
in
the number and size of journals. For example, in 1960, the field of
economics
was served by some 30 journals, almost all of which were nonprofit
ventures;
by 2000, there were 300 economics journals, 2/3 of them from commercial
pressesref.
Growth
within the academy and pressure on faculty members to publish have
led to the expansion of journal offerings, which, in turn, has sparked
demands for libraries to purchase them. It has been estimated that the
average scientific article costs $3,000 to publish (Frank M, Reich M,
Ra'anan
A. A not-for-profit publisher's perspective on open access. Serials Rev
2004;30:281-7). Higher costs are associated with greater rigor and
selectivity
of the peer-review process, as well as with higher levels of technical
review and copy editing. Such costs are traditionally recovered through
institutional subscriptions, as well as from advertising, fees for
author
submissions and color figures, and reprint sales. The more extreme
advocates
of open access believe that the scientific literature should be free to
the reader, but there is a cost associated with publishing. The
question
thus becomes how to recover this cost in a way that satisfies the need
for access. PLoS Biology, an open-access journal published by the
Public
Library of Science, has solved this problem. Its authors pay a $1,500
fee
to have an article published, but this charge is a fraction of the real
cost of publication. The remainder is covered by foundation grants from
supporters of open access and by institutional membership fees (similar
to subscription fees). It is unlikely, however, that sufficient
philanthropy
exists to make up the difference between $1,500 and the true cost of
publication
for the > 5000 journals indexed by PubMed. Consequently, in a world
in
which authors pay to publish, most journals will have to ask authors to
contribute the full cost of publication, which for many will be >
$3,000. On
May 2, 2005, the National Institutes of Health (NIH) initiated a
program
to provide the public with access to the research of the investigators
it supports. The agency asked NIH-funded authors to deposit their
peer-reviewed
manuscripts voluntarily into PubMed Central (a full-text repository)
within
12 months after journal publication. The original plan required that
published
articles be deposited after only 6 months, but it was modified in
response
to public comments and the recognition that this requirement could have
a deleterious effect on niche journals and quarterly publications. The
NIH asserted that its policy would avert the need for journals to move
from subscriptions to "author-pays" publishing. Efforts are now under
way,
however, to make deposit mandatory within 6 months and require that
grantees
deposit the final published copy of their articles. Although these
changes
would limit the confusion caused by the existence on PubMed Central of
clinically relevant manuscripts that have not undergone copy editing
and
technical review, they would also negatively affect journals whose
articles
report predominantly NIH-funded research and those that serve niche
fields
and are published quarterly, limiting their ability to recover costs
through
subscription revenue. The ready availability of content on PubMed
Central
could lead to subscription cancellations and accelerate the transition
to an author-pays publishing model, the economic implications of which
are not adequately evaluated by John Willinsky, the principal
investigator
of the Public Knowledge Project at the University of British Columbia
(Willinsky
J. The access principle: the case for open access to research and
scholarship.
Cambridge, Mass.: MIT Press, 2006). A study at Cornell University
estimated
that author-pays publishing would increase that institution's expenses
by $1.5 million annuallyref.
If,
in order to survive, journals had to ask authors to pay the full cost
of publication, a portion of NIH grant funds would have to be diverted
each year to cover the cost of making grantees' 65,000 articles free to
readers. Spending some $200 million in support of open access should
give
Congress pause, particularly since the NIH budget has been cut this
year
for the first time in 36 years. At a time of shrinking budgets for
biomedical
research, does it make sense to spend scarce dollars on publication
costs
instead of on research to develop treatments and cures for disease?
Willinsky
makes the case for access to research literature as a public good, but
the advancement of medical knowledge through research is also a public
good. When there is not enough money to go around, the question facing
us is this: How should we decide which public good is preferable?
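The $200 million figure follows directly from the numbers quoted above:

```python
# Cost of making all NIH-funded articles free to readers under author-pays
# publishing, using the estimates cited in the text.
articles_per_year = 65_000   # NIH grantees' articles per year
cost_per_article = 3_000     # estimated full cost of publishing one article (USD)
total = articles_per_year * cost_per_article
print(f"${total:,}")         # $195,000,000 -- i.e. "some $200 million"
```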
HubMed : pubmed
rewired (RSS
feeds of literature queries - updated daily)
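Such feeds follow the standard RSS format, so a few lines of code (or any feed reader) can consume them. A minimal sketch, assuming an RSS 2.0 payload; the sample XML below is invented for illustration, not real HubMed output:

```python
import xml.etree.ElementTree as ET

# Invented sample of an RSS 2.0 feed for a saved literature query.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Literature query: malaria vaccine</title>
    <item><title>Example paper A</title><link>http://example.org/a</link></item>
    <item><title>Example paper B</title><link>http://example.org/b</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, link)
```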
The PubMed Text
Version will work with your handheld, but try PubMed
for Handhelds, which, in addition to PubMed searching, offers
PubMed's
Systematic Reviews subset and the Clinical Queries filters. The URL for
mobile phones with WAP browsers is: http://pubmedhh.nlm.nih.gov/indexw.html.
Another
interface for handhelds, PubMed
on Tap, is a product being evaluated and runs on the Palm and
PocketPC
operating systems. Both of these products are free to use. User
feedback
is encouraged.
Index Medicus : a monthly publication of the National
Library of
Medicine in which the world's leading biomedical literature is indexed
by author and subject
Quarterly Cumulative Index Medicus : a former publication
of the
American Medical Association, in which was indexed most of the medical
literature in the world; replaced by ...
Cumulated Index Medicus : an annual publication of the
National
Library of Medicine, comprising the twelve monthly issues of the Index
Medicus.
Textpresso is a new
text-mining
system for scientific literature whose capabilities go far beyond those
of a simple keyword search engine. Textpresso's 2 major elements are a
collection of the full text of scientific articles split into
individual
sentences, and the implementation of categories of terms for which a
database
of articles and individual sentences can be searched. The categories
are
classes of biological concepts (e.g., gene, allele, cell or cell group,
phenotype, etc.) and classes that relate 2 objects (e.g., association,
regulation, etc.) or describe one (e.g., biological process, etc.).
Together
they form a catalog of types of objects and concepts called an ontology.
After
this ontology is populated with terms, the whole corpus of articles
and abstracts is marked up to identify terms of these categories. The
current
ontology comprises 33 categories of terms. A search engine enables the
user to search for one or a combination of these tags and/or keywords
within
a sentence or document, and as the ontology allows word meaning to be
queried,
it is possible to formulate semantic queries. Full text access
increases
recall of biological data types from 45% to 95%. Extraction of
particular
biological facts, such as gene-gene interactions, can be accelerated
significantly
by ontologies, with Textpresso automatically performing nearly as well
as expert curators to identify sentences; in searches for two uniquely
named genes and an interaction term, the ontology confers a 3-fold
increase
of search efficiency. Textpresso currently focuses on Caenorhabditis
elegans literature, with 3,800 full text articles and 16,000
abstracts.
The lexicon of the ontology contains 14,500 entries, each of which
includes
all versions of a specific word or phrase, and it includes all
categories
of the Gene Ontology database. Textpresso is a useful curation tool, as
well as search engine for researchers, and can readily be extended to
other
organism-specific corpora of textref
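The kind of semantic query described above (two uniquely named genes plus an interaction term within one sentence) can be sketched with a toy ontology lexicon; all category terms and sentences below are invented for illustration:

```python
# Toy two-category ontology lexicon in the spirit of Textpresso's categories.
LEXICON = {
    "gene": {"lin-12", "glp-1", "daf-2"},
    "association": {"interacts", "binds", "associates"},
}

# Tiny invented corpus, already split into individual sentences.
SENTENCES = [
    "lin-12 interacts with glp-1 during development",
    "daf-2 expression increases under stress",
    "glp-1 binds daf-2 in vitro",
]

def tag(sentence):
    """Map each ontology category to the terms of that category found in the sentence."""
    words = set(sentence.lower().split())
    return {cat: terms & words for cat, terms in LEXICON.items()}

def find_interactions(gene_a, gene_b):
    """Sentences naming both genes together with any association-category term."""
    hits = []
    for s in SENTENCES:
        t = tag(s)
        if {gene_a, gene_b} <= t["gene"] and t["association"]:
            hits.append(s)
    return hits

print(find_interactions("lin-12", "glp-1"))
```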
International Amedeo Literature
Program
by Bernd Sebastian Kamps
: all services are free of charge. This policy was made possible thanks
to generous unrestricted educational grants provided by AMGEN,
AstraZeneca,
Berlex, Boehringer Ingelheim, Novartis, Pfizer, Roche, Schering AG.
Open
text
mining interface (OTMI) was first presented at the
Life Sciences Conference and Expo in Boston in April 2006. The proposal
would make coded text freely available to all. If all publishers were
to
adopt this or some similar standard, the entire literature would become
accessible for mining. Text-mining tools explore open 'text
bases',
especially the PubMed database. They scan many publications in order to
discover relationships based on phrases or sentences that, when
analysed
in combination, cumulatively link one object (such as a disease) to
another
(such as a molecule)
At the University of California, Berkeley, the BioText
project is being used to explore apoptosis, for example
At the University of Illinois in Chicago, the Arrowsmith
software explores the causes of disease
At the European Bioinformatics Institute near Cambridge, UK, the EBIMed
retrieval engine explores protein–protein interactions
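The cumulative-linking idea these tools share can be sketched as sentence-level co-occurrence counting: each sentence mentioning a disease term and a molecule term adds one unit of evidence for that pair. The term lists and corpus below are invented for illustration:

```python
from collections import Counter
from itertools import product

# Invented term lists and corpus; real systems use curated vocabularies.
DISEASES = {"malaria", "diabetes"}
MOLECULES = {"artemisinin", "insulin"}

CORPUS = [
    "artemisinin remains the first-line treatment for malaria",
    "resistance of malaria parasites to artemisinin is emerging",
    "insulin secretion is impaired in diabetes",
]

# Accumulate evidence for each (disease, molecule) pair across all sentences.
links = Counter()
for sentence in CORPUS:
    words = set(sentence.lower().split())
    for d, m in product(DISEASES & words, MOLECULES & words):
        links[(d, m)] += 1

print(links.most_common())  # strongest disease-molecule link first
```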
Google's web search manages to
make
the most useful references appear at the top of the page using
algorithms
that exploit the structure of the links between web pages. Pages with
many
links pointing to them are considered 'authorities', and ranked highest
in search returns. The ranking is refined by taking into account the
importance
of the origins of links to a paper.
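The link-analysis idea can be sketched as a small power iteration over an invented web graph; a page scores highly when many pages link to it and when those linking pages are themselves important. This illustrates the general principle, not Google's actual algorithm:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank; links maps each page to the pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a base share, plus a damped share of the rank of
        # each page linking to it, split across that page's outgoing links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Invented three-page graph: "b" is linked to by both "a" and "c".
web = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # "b"
```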
Google Scholar
uses the
citations at the end of each paper, rather than web links. It
automatically
identifies the format and content of scientific texts from around the
web,
extracts the references and builds automatic citation analyses for all
the papers indexed. This approach has been pioneered in computer
science
by ResearchIndex, software
produced by the information technology company NEC.
Much of the peer-reviewed material has been made
available to Google by publishers, including Nature Publishing
Group,
the Association for Computing Machinery and the Institute of Electrical
and Electronics Engineers, through a pilot cross-publisher search
engine
called CrossRef Search. Publishers have arranged for Google robots to
scan
the full texts of their articles. Users clicking on a hit returned by
Google
Scholar are directed to the article on the publisher's site, where
subscribers
can access full text and non-subscribers get an abstract or information
on how to buy an article. Google Scholar has a subversive feature,
however.
Each hit also links to all the free versions of the article it has
found
saved on other sites, for example on personal home pages, elsewhere on
the Internet.
The
Braunwald ImageBank,
edited by Eugene Braunwald, MD, is a collection of thousands of
high-quality
clinical images covering a variety of cardiovascular topics
Biodidac : > 6,000
images
available [free registration required]
Librarian
1.1
is a free tool powered by PHP, Apache and MySQL, for creating a
virtual
annotated library of PDF articles, designed for small trusted groups,
e.g.
science labs. Once a PDF file is put into the program, Librarian links
up with the PubMed database and automatically annotates keywords from
the
abstract and citation references. It allows a user to establish a
network
quickly, and then the home PC acts as a server available to any member
of the laboratory. Librarian works on any Windows, Macintosh, or
Unix-based
computer.
the digital
object identifier (DOI) system is an identification system for
intellectual property in the digital environment. Developed by the International
DOI Foundation on behalf of the publishing industry, its goals are
to provide a framework for managing intellectual content, link
customers
with publishers, facilitate electronic commerce, and enable automated
copyright
management. Publishing on the Internet requires new tools for managing
content. Where traditional printed texts such as books and journals
provided
a title page or a cover for specific identifying information, digital
content
needs its own form of unique identifier. This is important for both
internal
management of content within a publishing house and for dissemination
on
electronic networks. In the fast-changing world of electronic
publishing,
there is the added problem that ownership of information changes, and
location
of electronic files changes frequently over the life of a work.
Technology
is needed that permits an identifier to remain persistent although the
links to rights holders may vary with time and place. The network
environment
creates an expectation among users that resources can be linked and
that
these links should be stable. The DOI system provides a way to identify
related materials and to link the reader or user of content to them.
DOI
has wide applicability to all forms of intellectual content and can
therefore
be applied to all forms of related materials, such as articles, books,
classroom exercises, supporting data, videos, electronic files, and so
on. DOI provides a basis for work now in progress to develop automated
means of processing routine transactions such as document retrieval,
clearinghouse
payments, and licensing. Publishers and users are being encouraged to
experiment
with DOI usage, and to commonly develop guidelines for DOI scope and
rules
for usage. The DOI system has 2 main parts (the identifier, and a
directory
system) and a third logical component, a database.
the identifier: the DOI is made up of 2 components.
the first element -- the prefix -- is assigned to the publisher
by a registration agency. Eventually, there may be multiple
registration
agencies to serve separate geographical regions or for each
intellectual
property sector (such as text publishing, photographs, music, software,
etc.). However, at this stage there is only one registration agency and
Directory Manager. Prefixes all begin with 10 to designate the DOI
directory
manager, followed by a number designating the publisher who will be
depositing
the individual DOIs, which ensures that a publisher can designate its
own
DOIs without fear of creating duplicate numbers. Publishers may choose
to request a prefix for each imprint or product line, or may use a
single
prefix.
the second element, following a slash mark, is the suffix. This
is the designation assigned by the publisher to the specific content
being
identified. Many publishers have elected to use recognized existing
international
standards for their suffixes when such a standard applies to the object
being identified (e.g., ISBN for a book), but may alternatively choose
to use an internal code. In use, the DOI identifier is an opaque string
without intelligent meaning other than as an identifier. The suffix can
follow any system of the publisher's choosing, and be assigned to
objects
of any size -- book, article, abstract, chart -- or any file type --
text,
audio, video, image or software. An object (book) may have one DOI, and
a component within that object (chapter) may have another DOI. The
publisher
decides the level or "granularity" of identification based on the
nature
of objects sold and distributed over the Internet. The suffix can be as
simple as a sequential number or a publisher's own internal numbering
system.
the directory: the power of the DOI system is its function as a
routing or "resolution" system. Because digital content may change
ownership
or location over the course of its useful life, the DOI system uses a
central
directory. When a user clicks on a DOI, a message is sent to the
central
directory, where the current web address associated with that DOI is
looked up.
This location is sent back to the user's Internet browser with a
special
message telling the system to "go to this particular Internet address."
In a split second the user sees a "response screen" -- a Web page -- on
which the publisher offers the reader either the content itself, or, if
not, then further information about the object, and information on how
to obtain it. When the object is moved to a new server or the copyright
holder sells the product line to another company, one change is
recorded
in the directory and all subsequent readers will be sent to the new
site.
The DOI remains reliable and accurate because the link to the
associated
information or source of the content is so easily and efficiently
changed.
The underlying technology used in the DOI system is optimised for
speed,
efficiency, and persistence.
the database: information about the object identified is
maintained
by the publisher. However it is planned that the DOI system will also
collect
some minimum level of associated metadata to enable provision of
automated
efficient services such as look-up of DOIs from bibliographic data,
citation
linking, and so forth. Thus information about the object identified
(metadata)
might be distributed over several databases. It might include the
actual
content or the information on where and how to obtain the content or
other
related data. From these database systems is generated the information
that the user has access to in response to a DOI query, forming the
third
component of the DOI system.
The DOI can also serve as an agent. In the future, the DOI will also be
used to automate transactions. The DOI is being further developed to
incorporate
functionality which could enable the user to associate a function with
the DOI.
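The prefix/suffix structure and the resolution step can be sketched in a few lines. This is an illustrative sketch: the example DOI is invented, and https://doi.org/ is the current public resolver proxy in front of the central directory:

```python
def split_doi(doi):
    """Split a DOI into its (prefix, suffix) components at the first slash."""
    prefix, _, suffix = doi.partition("/")
    # All prefixes begin with "10." (the DOI directory manager designation),
    # followed by the number assigned to the depositing publisher.
    if not prefix.startswith("10.") or not suffix:
        raise ValueError(f"not a valid DOI: {doi!r}")
    return prefix, suffix

def resolver_url(doi):
    """Build the URL through which the central directory resolves a DOI."""
    return "https://doi.org/" + doi

prefix, suffix = split_doi("10.1234/example.5678")
print(prefix)   # 10.1234 -- registration-agency-assigned publisher prefix
print(suffix)   # example.5678 -- publisher's own designation for the object
print(resolver_url("10.1234/example.5678"))
```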
Although many clinical journals publish high-quality, clinically
relevant
and important original studies and systematic reviews, the articles
for each discipline studied are concentrated in a small subset of
journals.
This subset varied according to healthcare discipline; however, many of
the important articles for all disciplines in this study were published
in broad-based healthcare journals rather than subspecialty or
discipline-specific
journalsref.
top 20 journals, by number of highly rated papers published in 2000.
(Following
each journal's title is its total number of highly rated papers, and
percentage
of papers that were highly rated.)
Cochrane Database of Systematic Reviews*, 422 (95%)
Lancet, 134 (3.5%)
Journal of Clinical Oncology, 445 (15.4%)
British Medical Journal, 93 (2.7%)
Circulation, 92 (6.8%)
Journal of Advanced Nursing, 92 (15.1%)
Obstetrics and Gynecology, 88 (18.4%)
JAMA, 87 (4.5%)
New England Journal of Medicine, 83 (5.4%)
Archives of Internal Medicine, 81 (13.1%)
Journal of the American College of Cardiology, 76 (10.7%)
Pediatrics, 76 (9.4%)
American Journal of Cardiology, 72 (8.5%)
American Journal of Obstetrics and Gynecology, 72 (10.2%)
Critical Care Medicine, 70 (7.2%)
Chest, 66 (7.5%)
Stroke, 59 (9.7%)
Neurology, 58 (4.3%)
American Journal of Gastroenterology, 56 (2.8%)
Diabetes Care, 55 (10.4%)
top 5 by category
ACP Journal Club (internal medicine)
4 titles supplied 56.5% of the articles
New England Journal of Medicine, 25 (16.9%) (in the by-category tables,
the criterion used for highly-rated papers was stricter)
JAMA, 25 (16.9%)
Lancet, 22 (14.9%)
Cochrane Database of Systematic Reviews*, 11 (7.4%)
27 titles supplied the other 43.5%.
Annals of Internal Medicine, 8 (5.4%)
evidence-based medicine (general/family practice),
5 titles supplied 50.7% of the articles
JAMA, 18 (12.5%)
BMJ, 17 (11.8%)
Lancet, 17 (11.8%)
New England Journal of Medicine, 13 (9.0%)
Cochrane Database of Systematic Reviews*, 8 (5.6%)
40 titles supplied the remaining 49.3%
evidence-based nursing (general practice nursing),
7 titles supplied 51.0% of the articles
Qualitative Health Research, 10 (10.4%)
Cochrane Database of Systematic Reviews*, 8 (8.3%)
Pediatrics, 8 (8.3%)
JAMA, 7 (7.3%)
Lancet, 6 (6.3%)
34 additional titles supplied 49.0%.
Evidence-Based Mental Health:
9 titles supplied 53.2% of the articles
Archives of General Psychiatry, 12 (12.5%)
Cochrane Database of Systematic Reviews*, 6 (6.3%)
American Journal of Psychiatry, 5 (5.2%)
British Journal of Psychiatry, 5 (5.2%)
JAMA, 5 (5.2%)
34 additional titles supplied 46.8%.
*A database that publishes quarterly systematic reviews of the literature. For the purposes of this study it was considered a separate journal.
For the disciplines of internal medicine, general/family practice,
and mental health (but not general practice nursing), the number of
clinically
important articles was correlated with Science Citation Index (SCI)
Impact
Factors.
Aggregators: services that provide a single point of online access to the full-text publications of multiple publishers and/or providers.
TORPEDO Ultra, by the Naval Research Laboratory's Ruth H. Hooker Research Library, is a local repository of over two million full-text publications from 12 separate publishers. Access to any part of this collection is restricted to authorized users only.
Connotea by Nature
Publishing Group
[free registration required] is a place to keep links to the
articles you read and the websites you use, and a place to find them
again.
It is also a place where you can discover new articles and websites
through
sharing your links with other users. By saving your links and
references
to Connotea they are instantly on the web, which means that they are
available
to you from any computer and that you can point your friends and
colleagues
to them. In Connotea, every user's bookmarks are visible both to
visitors
and to every other user, and different users' libraries are linked
together
through the use of common tags or common bookmarks. You can save links
to any online content, but there is special functionality for articles
from Nature journals, PubMed, HubMed or D-Lib Magazine — Connotea
recognises
URLs from these sites and imports the bibliographic information for the
article you bookmarked. You can bookmark pages and papers by clicking on 'add' and copying and pasting the URL into the form there. However, the easiest
way of saving a link is to use the Connotea bookmarklets while you're
browsing.
You can organise your collection of references and websites by simply
assigning
tags (which you can think of as categories or labels) to your
bookmarks.
You can assign as many tags as you want to a bookmark, and they can be
almost anything you like. However, be aware that if you want to use a
phrase
(such as "cladistic analysis") as a tag, you should use quotes to mark
out the phrase, otherwise Connotea will assume that you're assigning
two
separate tags. Because tags are simply words, other users will end up
using
the same tags as you. This is an interesting way of finding related
content
— if you click on one of the tag names underneath a bookmark you'll be
taken to a page that lists all the links that have been assigned that
tag
by all users. There's also a list of related tags on the right hand
side
of every list of bookmarks. This shows all the tags that have been used
for those bookmarks, either by you or by other people. Clicking on
those
tag names is another way of finding related content. If more than one
user
has bookmarked a link, there will be a 'and n others' link underneath
it.
Clicking on that will show you a list of all the users who have
bookmarked
it, and a list of the tags they used for it. You can also browse
another
user's library by clicking on their username.
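The quoting rule described above (a quoted phrase such as "cladistic analysis" is treated as one tag, while unquoted words become separate tags) can be illustrated with a short sketch. This is not Connotea's actual parser, only a plausible reading of the rule using Python's shlex module:

```python
import shlex

def parse_tags(tag_string: str) -> list[str]:
    """Split a tag string the way the quoting rule describes:
    whitespace separates tags, but a quoted phrase stays one tag."""
    return shlex.split(tag_string)

print(parse_tags('"cladistic analysis" phylogeny'))
# two tags: ['cladistic analysis', 'phylogeny']
print(parse_tags('cladistic analysis phylogeny'))
# three tags: ['cladistic', 'analysis', 'phylogeny']
```

Because tags are bare words, dropping the quotes silently changes the meaning, which is exactly the pitfall the text warns about.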
CiteULike is a similar online academic bookmark-management service, modelled on the social bookmarking site del.icio.us.
Personal digital
assistants
(PDAs) are capable of changing how health care is delivered, since they merge and integrate many functions in one device that is versatile, customisable, and portable. According to market polls, the worldwide PDA market comprised 10.5 million devices in 2003. Clinicians
are
rapidly adopting PDAs into their daily practice (Medicine on the move.
PDAs and tablet PCs make the rounds with doctors and nurses. Postgrad
Med
2004; 115 (suppl): i-vi). In one study, > 50% of all doctors younger
than
35 years in developed countries used a PDA in 2003ref.
In
a survey from the University of California (San Francisco, CA, USA)
40–50% of all US physicians and junior doctors (also referred to as
residents
in the USA) use or can use a PDAref.
In
2005, the proportion of US doctors using PDAs is expected to be well
above 50% and rising. This Review provides an overview of current PDA
technologies,
applications relevant to medical education and clinical practice, a
guide
to medical software, safety and security, a personal perspective,
current
limitations,
and a future outlook. In the past, hand-held computing was restricted
to
sophisticated programmable calculators with or without a data storage
option.
By comparison, most PDAs currently run on the mobile operating systems
of either Palm OS (PalmSource Inc, Sunnyvale, CA, USA) or Microsoft
Windows
(Microsoft Corp, Redmond, WA, USA) that, in addition to their intrinsic
functionality, allow customisation by the installation of third-party
software
applications. Furthermore, some Palm OS or Windows mobile-based PDAs
have
a Java (Sun Microsystems, Santa Clara, CA, USA) runtime that allows the
use of platform-independent, Java-based applications. Other platforms
such
as Newton (Apple Computer, Cupertino, CA, USA), Psion (Psion Teklogix,
Mississauga, ON, Canada), BeOS (PalmSource Inc), Symbian OS (Symbian,
London,
UK), and Blackberry (Research in Motion, Waterloo, ON, Canada),
currently
have no major role in the health-care market. In 1996, Palm Inc
introduced
the Pilot 1000 and Pilot 5000 products running the Palm OS operating
system
(PalmSource Inc) that led the resurgence of hand-held computing. In
1999,
the company added advanced wireless communications capabilities to the
Palm OS platform to address the demand for mobile information
appliances.
Palm's policy of providing registered developers with access to the source code of the Palm operating system led to the development of more than 40,000 software applications, running on the > 36 million Palm OS
devices
sold, unmatched by any other hand-held operating system so farref1,
ref2.
Microsoft Windows Mobile is Microsoft's most recent operating system for hand-held devices. Its source code is proprietary and available only to professional device and software manufacturersref.
Although
Palm Inc still markets its own line of devices directly, both
Palm OS and Windows mobile-based PDAs and smartphones (devices with a
mobile
phone and PDA combined) are also designed, manufactured, and
distributed
by several major computer manufacturers. PDAs are shirt-pocket-sized
devices
with a touch-sensitive screen, a dedicated input area or keyboard,
customisable
application buttons, and a multiway (button or mini joystick) navigator
to browse information on the screen. Depending on the brand and model,
some devices feature an expansion slot for memory cards or accessories,
a built-in camera, headphone jacks, speaker, microphone, ports for
infrared,
Bluetooth, or Wi-Fi (Wireless Fidelity), and even built-in GPS (global
positioning system) receivers. PDAs are now generally equipped with a
comprehensive
suite of personal information management software or the option to
integrate
with common brands of such software, note-taking applications, and
contact
databases. PDAs can connect to desktop computers and wireless local
area
networks (W-LAN) using infrared, Bluetooth (first developed by
Telefonaktiebolaget
L M Ericsson, Stockholm, Sweden, now Bluetooth Special Interest Group
[SIG],
Delaware, DE, USA), or Wi-Fi communication technology. The desktop
synchronisation
software or additional add-on applications provide compatibility with
popular
office file formats. Most devices feature an e-mail application to
integrate
with current office suites, which allows users not only to carry
critical
files when travelling, but also to synchronise important files quickly
and easily between desktop and hand-held devices. Smartphones enhance
the
basic PDA functionality with wireless communication properties,
including
instant messaging, e-mail, web browsing, data synchronisation with
remote
servers and networks, and even video conferencing, if used in the
coverage
of commercial cellular telephone networks.
Example of basic PDA functionality: a sample main application screen on Palm OS 6.1.
Physicians, nurses, dieticians, medical students and trainees, and
other health-care professionals must review an ever-increasing amount
of
constantly changing information about their patients several times a
day
and correlate the data with the most recent diagnostic and therapeutic
recommendations and management options to make sound decisions.
Traditionally,
health-care professionals consulted meticulously collected personal
notebooks
and article cut-outs, white-coat-pocket manuals, subscription journals,
medical reference books, or electronic references on desktop computers.
The wealth of information and its constant changes due to the
accelerated
pace in translational research in biomedical science mean that these
traditional
resources are very difficult to keep up to date. Fast approval and
propagation
of newly discovered therapies by regulatory agencies such as the FDA
(US
Food and Drug Administration) or EMEA (European Medicines Agency) can
also
lead to more frequent recalls of drugs, medical products, and devices
(as
well as newly issued warnings); labelling changes; and novel
interactions
with existing compounds. Additionally, with the advent of overzealous documentation, coding, and billing requirements in managed care, chronically overworked health-care professionals make an increasing number of treatment and management errors, because the time available to spend with patients is diminishing.
PDAs can help to overcome some of these problems. The education of
medical
students now relies heavily on computer technology, beginning with the
replacement of animal experiments by computer simulations in basic
science
laboratories, multimedia study programmes and exercises, and the replacement of paper-and-pencil board examinations with fully computerised systems in
the USA and other countries. PDAs fit very well with these concepts,
and
the fact that medical students were among the earliest adopters of PDA
use is unsurprisingref.
Many
medical schools require students to acquire basic clinical skills
in clerkships. Faculty staff and students generally complete lengthy
assessment
forms at the end of the respective rotation, which do not always allow
for a timely feedback and balanced learning experience. Electronic
records
of patient encounter and procedure logs maintained by the students on
their
PDAs, which are synchronised with either a central database or the
mentor's
desktop system, provide an interesting new approach. This concept has
been
assessed by several academic medical centres for rotations in internal
medicine, family medicine, and emergency medicine in surveys. Medical
students
thought the logs were convenient to use. This system generally increased the number of patient encounters and recorded diagnoses, helped improve history-taking skills by alerting students to under-addressed issues such as women's health, improved overall computer literacy, allowed large gaps in basic clinical skills to be identified immediately, and provided easy mutual feedback with faculty staff during clinical clerkshipsref1,
ref2,
ref3,
ref4
(Lee JS, Sineff SS, Sumner W. Validation of electronic student
encounter
logs in an emergency medicine clerkship. Proc AMIA Symp 2002; 425-429;
Denton GD, Williams RW, Pangaro L. Core problems reported by students
in
a palm OS and Internet-based problem entry system predicts performance
on the third-year internal medicine clerkship. AMIA Annu Symp Proc
2003;
827; Bakken S, Sheets CS, Curtis L, Soupios M, Curran C. Informatics
competencies
pre-and post-implementation of a Palm-based student clinical log and
informatics
for evidence-based practice curriculum. AMIA Annu Symp Proc 2003;
41-45).
The early use of a clinical management approach to evidence-based
medicine
is a worthy goal in undergraduate medical education. 2 studies were
undertaken
to investigate whether PDAs could assist this approach at the point of
care. In both studies, medical students were given PDAs preloaded with
either university-developed clinical decision-support software (CDSS) or a bundle of commercial decision-support applications commonly used by clinicians.
Multivariable regression analysis showed that improved perceived usefulness of PDAs with CDSS was associated with supportive faculty attitudes, good knowledge of evidence-based medicine, and enhanced computer-literacy skills. Greater satisfaction with the CDSS than with the commercial decision-support applications was associated with increased use in a clinical setting and improved search success ratesref.
In
the second study, pre-orientation and post-orientation questionnaires
and a post-rotation assessment measured students' comfort levels, and
the
perceived usefulness of PDAs with CDSS and ratings of programmes on
their
PDAs were analysed. PDAs almost always enhanced the clerkship
experience,
although the outcome measures were not as clearly defined as those in
the
first studyref.
The
education effectiveness of evidence-based-medicine learning was
investigated
objectively in a randomised controlled trial, in which students' use of
a PDA with CDSS was compared with use of a pocket card containing guidelines, and with controls. Main outcome measures were factored and individual item
scores
from a validated questionnaire on personal, current, and future use of
evidence-based medicine; use of evidence during and after the clerking
of patients; frequency of discussions on the role of evidence during
teaching
rounds; and self-perceived confidence in clinical decision-making. The PDA group showed significant improvements in all outcome scores, with the largest
change in students' educational experience with evidence-based
medicine.
No substantial deterioration was seen in the improvements even after
the
withdrawal of PDAs during an 8-week washout period, which suggested at
least short-term sustainability of PDA effectsref.
PDAs
can also assist in telementoring and multimedia learning. 2 studies
have shown the feasibility of live wireless transmissions of
laparoscopic
surgical procedures to PDAs. One of these studies also compared the
recognition
of anatomical landmarks on PDA screens with that on standard computer
monitors
during the procedure and showed significant improvementsref1,
ref2.
PDAs
could also help enhance the classroom learning experience. In a pilot
study, a histology class teacher polled the students about
effectiveness,
student interest, and comprehension with Bluetooth-equipped PDAs.
End-of-class
survey results indicated that students were enthusiastic about the
polling
deviceref.
Overall,
current data lend support to the potential usefulness of PDAs
in medical education. However, large randomised controlled trials with
comparisons of PDA with non-PDA groups and with objective outcome
measures,
such as performance in in-house or board examinations, are needed to
substantiate
these early observations. Another important aspect of hand-held
computer-assisted
learning is the integration of faculty staff, who are traditionally
more
reluctant to adopt new technology than studentsref.
Several
programmes for junior doctors at leading US academic institutions
(such as Harvard Medical School, Boston, MA; Columbia College of
Physicians
and Surgeons, New York, NY; or Georgetown University Medical School,
Washington,
DC), have been early adopters of hand-held computers and provide their
junior doctors with PDAs and software bundles. Training programme
accreditation
authorities and medical specialty boards demand an ever-increasing
documentation
of patient exposure and procedural performance, to maintain and improve
training standards. Apart from log cards, no simple and reliable
mechanisms
currently exist for directors of junior doctor programmes to assess how
well their trainees are being exposed to teaching in their specialties
and what curriculum weaknesses need to be addressed. Several studies in
specialties such as anaesthesia, emergency medicine, family practice,
general
surgery, internal medicine, neurology, obstetrics and gynaecology,
radiology,
and urology, have demonstrated the usefulness of PDAs to simplify data
collection and assess doctor and programme performanceref1,
ref2,
ref3,
ref4,
ref5,
ref6
(Sequist TD, Singh S, Pereira A, Pearson SD. On Track: a database for
evaluating
the outpatient clinical experience of internal medicine residency
training.
AMIA Annu Symp Proc 2003; 1002). A larger survey of junior doctors in six
training programmes in family practice, internal medicine, neurology,
paediatrics,
radiology, and surgery found that, among the advantages, many junior doctors readily adapted their personal organisers to help keep track of their clinical tasks and keep in touch with patients, and that commercial medical references were what the surveyed residents used most to answer immediate medical questions.
The perceived drawbacks included: calculators and patient trackers could not easily be tailored to residents' needs (eg, to restrict and modify the types of calculation to just those actually used); the physical size (both too small for display and too bulky overall); and, as several junior doctors mentioned, a concern about becoming too dependent on one source of information, which was viewed as too easy to lose or break. PDAs
were widely used across the spectrum of specialties, irrespective of
encouragement
by the training programmeref.
PDAs
can also assist in assessing the performance of clinical educators
and students in objective structured clinical examinations (OSCE)ref
(McGowan JJ, Bangert M, Ballinger SH, Highbaugh S. Implementing
wireless
evaluation in a hospital-based OSCE center. AMIA Annu Symp Proc 2003;
930).
The available data suggest the potential usefulness of PDAs in junior
physician
education. However, as concluded for medical student education, larger
randomised controlled trials and surveys are needed to compare
PDA-assisted
training with traditional training in institutions and specialties by
use
of objective outcome measures, such as performance in in-house or board
examinations, to define the role of PDAs in postgraduate medical
education.
PDAs are widely used among health-care professionals across all major
specialties.
A study of 2130 paediatricians selected randomly from the American
Medical
Association's Physician Masterfile (American Medical Association,
Chicago,
IL, USA) aimed to calculate the percentage of paediatricians using
PDAs,
deduce the perceived strengths and weaknesses of PDAs, and explore
characteristics
associated with beliefs and use. The most commonly used applications
were
for drug reference (80%), followed by scheduling (67%), medical
calculations
(61%), prescription writing (8%), and billing (4%). PDA users were
significantly
more likely to be male, come from an urban community, have recently
graduated
from medical school, and work in non-private practice. Users were also
more likely to believe that PDAs could reduce medical error, but often
complained about memory capacity, although small screen size and system
speed were not problemsref.
With
35–40% of respondents using a PDA, this study is a good example of
mainstream hand-held computer use by physicians in many clinical
specialties.
PDAs can replace bulky drug reference books and help with the selection
and comparison of drugs, identification of dosing schedules, and dose
adjustment
when drug excretion is impaired. A major advantage of PDA use over
paper-based
drug references is drug interaction checking and—if updated
(synchronised)
with an institutional or commercial server regularly—the most
up-to-date
drug information and immediate access to alerts or recalls from
regulatory
or government agencies, such as the FDA or CDC (Centers for Disease
Control,
Atlanta, GA, USA). The usefulness of PDA-based drug references,
including
parenteral nutrition, blood products, and chemotherapy, and drug
interaction
checks has been established in several different studiesref1,
ref2,
ref3,
ref4,
ref5,
ref6,
ref7.
The
effect of PDA use on medication safety can be even greater if use is
extended to nursing staff and combined with patient identification
systems.
To improve patient safety in hospitals by reducing medication and treatment errors, the FDA published a final rule on bar-code label
requirements
for human drugs and biological products, in February, 2004ref.
Bar
codes are now required on most prescription drugs, blood, blood
products,
and specific over-the-counter drugs. This system begins when a patient
is admitted to the hospital. The hospital gives the patient a bar-coded
identification bracelet to link to his or her computerised medical
record.
As required by the FDA rule, most prescription drugs and specific
over-the-counter
drugs would have a bar code on their labels. The health-care team uses
PDA-based bar-code scanners that are linked to the hospital's computer
system of electronic medical records. Before a health-care worker gives
a drug to the patient, the health-care worker scans the patient's bar
code,
which allows the computer to access the patient's computerised medical
record. The health-care worker then scans the drug that the hospital
pharmacy
has provided for treatment. This scan informs the computer which drug
is
being given. The computer then compares the patient's medical record
with
the drug being given to ensure that they match. Therefore some of the
following
problems (unfortunately not uncommon) could be easily avoided: wrong
patient,
wrong dose of drug, wrong drug, and wrong time to administer the drug.
The technology is available and has already been implemented in some
multisite
facilities in the USA with some successref1,
ref2,
ref3,
ref4,
ref5,
ref6.
Daily writing of progress notes, with interpretation of patients' data, management plans, and coding of medical treatments and procedures, is a crucial clinician responsibility. However, the quality and legibility of notes are
often
inadequate. The following studies illustrate how the quality of medical
records can be enhanced with PDA use. In a paediatric critical-care
unit,
researchers recorded documentation discrepancies in 60% of
daily-progress
notes. Therefore, they undertook a before-and-after trial to determine
whether a point-of-care, PDA-based patient record and charting system
could
reduce discrepancies in progress note documentation by junior doctors
in
a neonatal intensive-care unit. They recorded significantly fewer
documentation
discrepanciesref.
Another
randomised study investigated whether hand-held computer-based
documentation could improve both the quantitative and qualitative
aspects
of medical records in orthopaedic surgery. The electronic documentation
consisted of a specially designed software package on a hand-held
computer
for bedside use with structured decision trees for examination, access
to a history, and coding. In the control group, chart notes were
compiled
on standard paper forms and were subsequently entered into the
hospital's
information system. The number of documented ICD (International
Classification
of Diseases) diagnoses was the primary endpoint for sample size
calculations.
All patients' charts were reread by an expert panel, which assigned
quality
ratings to the different documentation systems by scrutinising the
extent
and accuracy of patients' histories and physical findings assessed by
daily
chart notes. Documentation with the hand-held computer significantly
increased
the median number of diagnoses per patient from four to nine, but it
produced
some over-coding for false or redundant items. Documentation quality
ratings
improved significantly with the introduction of the hand-held device
with
respect to the correct assessment of a patient's progress and
translation
into ICD diagnoses. Various learning curve effects were recorded with
different
operators (Stengel D, Bauwens K, Walter M, Kopfer T, Ekkernkamp A.
Comparison
of handheld computer-assisted and conventional paper chart
documentation
of medical records. A randomized, controlled trial. J Bone Joint Surg
Am
2004; 86-A: 553-560). These findings were confirmed by another
orthopaedic
surgery study in outpatientsref.
A
study among anaesthesiologists investigated their experience of using
acute pain assessment software on a PDA for patient management. PDA
assessments
were more likely to contain documentation regarding pain and
side-effects
than paper assessments. The median time of the assessment period during
the patient encounter was longer with the PDA than with paper; however,
the median period for the total encounter time (chart review,
assessment,
documentation) was significantly shorter with the PDA than with paperref.
The
battle between health insurers and physicians about claims is not overref.
Claims
are frequently denied or delayed on technicalities such as over-coding
or under-coding, which PDA use could help to avoid in the future. Many
clinicians
have difficulty determining the appropriate code for current procedural
terminology (CPT) or evaluation and management (E&M) to assign to
the
type and intensity of patient care they provide. Several surveys
reported
PDA-based charge capture and billing programmes were more accurate than
paper. The reimbursement advantage was estimated to be 20%ref1,
ref2,
ref3.
Quality
assessment and outcomes research in large medical associations
require the acquisition of, analysis of, and response to point-of-care
data.
Although most hospitals now process much of their clinical and
administrative
data electronically, data acquisition from the actual care providers
and
patients during encounters is still accomplished with an intermediate
paper process. PDAs have the potential to simplify and accelerate this.
Several studies, particularly in procedure-oriented specialties, have
shown
feasibility and measurable benefits of PDA-based data collection,
because
they allowed the quick modification of the study design, rapid data
acquisition,
and processing, to enable immediate effect of the results on clinical
and
administrative daily practice. This type of data collection increased
performance
almost instantly. Data were obtained with PDAs from either providers or
patients to assess patient-perceived outcomesref1,
ref2,
ref3,
ref4,
ref5
(Astrahan MA. HDR quality assurance methods for personal digital
assistants.
Med Dosim 2004; 29: 166-172). Quality of care can be improved with the
implementation of CDSSref,
evidence-based
medicineref,
or
other critically appraised publications and with alerting systems in
hand-held computers. In a survey of 1538 health-sciences faculty staff
and junior doctors, most responders indicated that they would like to
learn
more about clinical resources for PDAsref.
Although
many health-care professionals already rely on various sources
of medical reference applicationsref,
their
effect on the quality of care is currently under-explored. Pilot studies have assessed either an interface to access institution-provided, critically appraised topics, or headlines delivered to users' PDAs alerting them to new books, National Guideline Clearinghouse guidelines, Cochrane reviews, and National Institutes of Health (NIH) Clinical Alerts, as well as updated content in UpToDate (UpToDate, Waltham, MA, USA), Harrison's Online (McGraw Hill, Princeton, NJ, USA), Scientific American Medicine (now renamed ACP Medicine; American College of Physicians, Philadelphia, PA, USA), and Clinical Evidence. Participants could request additional
information for any of the headlines, and the information that was
delivered
via e mail during their next synchronisation was perceived as helpfulref1,
ref2,
ref3,
ref4.
Example of PDA-based software for clinical decision support: a MedCalc sample equation to calculate the Ranson score to assist in the management of acute pancreatitisref.
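Such a calculator reduces to counting threshold criteria. Below is a minimal sketch of the five admission criteria of the Ranson score for non-gallstone acute pancreatitis (age > 55 years, white-cell count > 16,000/mm3, glucose > 200 mg/dL, AST > 250 IU/L, LDH > 350 IU/L); this illustrates the idea, not MedCalc's implementation, and the additional 48-hour criteria are omitted:

```python
def ranson_admission_score(age_years, wbc_per_mm3, glucose_mg_dl,
                           ast_iu_l, ldh_iu_l):
    """Count how many of the five admission criteria of the Ranson
    score (non-gallstone acute pancreatitis) are met."""
    criteria = [
        age_years > 55,
        wbc_per_mm3 > 16_000,
        glucose_mg_dl > 200,
        ast_iu_l > 250,
        ldh_iu_l > 350,
    ]
    return sum(criteria)

# A 60-year-old with WBC 18,000/mm3, glucose 150 mg/dL,
# AST 300 IU/L, LDH 200 IU/L meets three criteria
print(ranson_admission_score(60, 18_000, 150, 300, 200))  # → 3
```

The appeal of putting this on a PDA is that the clinician enters bedside values and never has to recall the thresholds.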
Example of a guideline reference application :
Sample screen of the Shots 2005 application, developed by the Society
of Teachers of Family Medicine and based on guidelines of the CDC
National
Immunization Programref.
Reproduced
with permission of Dr Richard K Zimmerman, University of Pittsburgh,
PA, USA on behalf of the Group of Immunisation Education, Society of
Teachers
of Family Medicine. The Lister Hill National Center for Biomedical
Communications
(Bethesda, MD, USA), a research and development division of the
National
Library of Medicine (NLM) of the NIH, has undertaken a project to
discover
and implement design principles for point-of-care delivery of clinical
support information. PubMed on Tap is an application for PDAs that
retrieves
MEDLINE citations directly from the PDA through a wireless connection
to
the internet. PubMed on Tap features include several PubMed search
options,
a history of previous queries, the ability to save citations to an
electronic
memo pad, two clustered results options, and links to journal websitesref.
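Programmatic MEDLINE retrieval of the kind PubMed on Tap performs is publicly available through NCBI's E-utilities web API. The sketch below only builds an ESearch query URL (no network call is made); it illustrates the kind of request such a client could send, not how PubMed on Tap itself is implemented:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(term: str, retmax: int = 20) -> str:
    """Build an NCBI ESearch URL that returns PubMed IDs (PMIDs)
    matching a query; fetching the URL would yield XML results."""
    params = urlencode({"db": "pubmed", "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{params}"

print(esearch_url("acute pancreatitis ranson"))
```

A full client would then pass the returned PMIDs to EFetch to retrieve the MEDLINE citations themselves.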
The
National Cancer Institute (NCI; Bethesda, MD, USA), another NIH branch,
has also recognised the need for new information delivery methods and
is
currently undertaking a research study that investigates how
health-care
professionals use cancer information on hand-held wireless devices. The
AvantGo Enterprise 4.2 Solution (iAnywhere Solutions, Dublin, CA, USA)
provided the platform to deliver the website content of NCI's cancer
information
service (CIS) onto hand-held devices. Several obstacles still need to
be
overcome before this service will be available to the general publicref.
Other clinical settings where PDA-based decision support devices have
been
reported to be useful or advantageous include: emergency and mass
casualty
triage, data management of transplantation patients, management of
patients
with stroke, infection control, and enforcement of
institution-specific,
rational antibiotic useref1,
ref2,
ref3,
ref4,
ref5,
ref6.
(Ray
HN, Boxwala AA, Anantraman V, Ohno-Machado L. Providing
context-sensitive
decision-support based on WHO guidelines. Proc AMIA Symp 2002; 637-641;
Quaglini S, Caffi E, Boiocchi L, Panzarasa S, Cavallini A, Micieli G.
Web-based
data and knowledge sharing between stroke units and general
practitioners.
AMIA Annu Symp Proc 2003; 534-538; Chang P, Hsu Y, Tzeng Y, Hou IC,
Sang
YY. Development and pilot evaluation of user acceptance of advanced
mass-gathering
emergency medical services PDA support systems. Medinfo 2004;
1421-1425).
Although these concepts undoubtedly have potential, no study so far has
compared this approach with existing methods of information delivery or
performance of users in board examinations or re-certifications. Most
patients
feel comfortable with their physicians using a PDA in daily clinical
practice
(Houston TK, Ray MN, Crawford MA, Giddens T, Berner ES. Patient
perceptions
of physician use of handheld computers. AMIA Annu Symp Proc 2003;
299-303).
However, their use is not restricted to health-care providers. PDAs can
serve as electronic patient diaries and prediction devices in diseases
that flare intermittently, such as asthma or urticaria. The
successful
use of PDAs in diabetes care to improve glycaemia in patients with
insulin
pumps has been reported. PDAs can also help migraine patients to
predict
attacksref1,
ref2,
ref3,
ref4,
ref5,
ref6
(Kerkenbush NL. A comparison of self-documentation in diabetics:
electronic
versus paper diaries. AMIA Annu Symp Proc 2003; 88. Kwak M, Han SB, Kim
G, et al. The knowledge modeling for chronic urticaria assessment in
clinical
decision support system with PDA. AMIA Annu Symp Proc 2003; 902). The
new
use of PDAs in patients has also been recognised by government agencies
such as the US Public Health Service (USPHS), which released an
interactive
programme for Palm PDAs to help patients quit smoking. The programme is
distributed through the Agency for Healthcare Research and Quality
(AHRQ)
and is available on their websiteref.
In addition to these professional applications, the internet is replete
with software of the fitness, wellness, and personal health-care
categories,
such as menstrual calendars, diet, weight, calorie, and workout
management
applications, among others. PDAs could help patients with brain
dysfunction
or injury as cognitive-behavioural orthosesref1,
ref2.
A
frequent outcome in these patients is memory impairment. One group of
researchers designed and tested a mobile-distributed care system in a
cognitive
neurology day-care clinic of an academic medical centreref.
A
PDA-based speech synthesiser for speech-impaired patients has also been
reportedref.
With
an extended bandwidth of cellular telephone networks (eg, universal
mobile telecommunications system or UMTS) and high-speed institutional
wireless networks, teleradiology on hand-held computers may become a
reality.
Pilot studies have shown promising data, such as CT scans that have
been
transmitted in the industry standard format of DICOM (digital imaging and
communications in medicine) and that have been assessed remotely by
radiologists.
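As an illustration of the transfer format: DICOM Part 10 files begin with a 128-byte preamble followed by the 4-byte magic string "DICM", which a receiving application can check before attempting full parsing. A minimal sniffing sketch in Python; real viewers would use a complete DICOM library, and the synthetic bytes below are invented for illustration:

```python
def looks_like_dicom(data: bytes) -> bool:
    """Return True if the byte string carries a DICOM Part 10 header."""
    # Per the DICOM standard: 128-byte preamble, then the magic "DICM".
    return len(data) >= 132 and data[128:132] == b"DICM"

# Synthetic example: an empty preamble plus the magic bytes.
fake_study = bytes(128) + b"DICM" + b"\x02\x00\x00\x00"
print(looks_like_dicom(fake_study))     # True
print(looks_like_dicom(b"plain text"))  # False
```

This check only identifies the container; reading pixel data and transfer syntaxes requires a full DICOM parser.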
Echocardiograms have also been successfully read on PDAsref1,
ref2
(Raman B, Raman R, Raman L, Beaulieu CF. Radiology on handheld devices:
image display, manipulation, and PACS integration issues. Radiographics
2004; 24: 299-310). International, randomised, multicentre clinical
trials
usually need the collection, storage, and processing of large amounts
of
data. Data collection by investigators and study coordinators is
traditionally
done with specifically designed paper forms in clinical research files
or complex telephone interview systems. Most trials also need the
repeated
completion of patient questionnaires to calculate standardised disease
activity or quality-of-life scores. Unfortunately, paper-based,
self-administered
instruments remain inefficient for data collection because of missing
information,
respondent error, and slow data analysis due to processing delay from
paper-to-computer
file conversion. The advantages of PDAs to improve trial efficiency,
speed up
data analysis, and even improve patient safety due to earlier
availability
of results of interim analyses, among others, are obvious. Text and
photo
data capture, transmission feasibility, and visual analogue scales have
been validatedref1,
ref2,
ref3,
ref4
(Sellors JW, Hayward R, Swanson G, et al. Comparison of deferral rates
using a computerized versus written blood donor questionnaire: a
randomized,
cross-over study [ISRCTN84429599]. BMC Public Health 2002; 2: 14). PDA
appliances can record, store, and transmit virtual electrocardiograms
and
electrochemical dataref1,
ref2.
There are comprehensive PDA-based data recorders that, in combination
with
a sensor vest, continuously encrypt and store patients' physiological
data
(ie, blood pressure, blood oxygen saturation, electroencephalograms,
electro-oculograms,
periodic leg movement, core body temperature, skin temperature, end
tidal
CO2, and cough) on a memory card. Patients could also record
time-stamped symptoms, moods, activities, and other endpoint-specific
information
in the recorder's digital diary. These features allow researchers to
correlate
multiple physiological indices that can be objectively measured with
subjective
inputref.
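At its simplest, correlating an objectively measured physiological index with the subjective diary input described above is a correlation over time-aligned samples. A minimal sketch; the end-tidal CO2 and symptom-score values are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient (pure stdlib for portability)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical recorder output: hourly end-tidal CO2 readings (mm Hg,
# objectively measured) time-aligned with the patient's diary symptom
# scores (0-10, subjective input).
etco2 = [38, 40, 44, 46, 41, 39]
symptom = [2, 3, 6, 7, 4, 2]

print(f"Pearson r = {pearson(etco2, symptom):.2f}")
```

A real analysis would also align the two streams by their timestamps and handle missing diary entries.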
Clinical
research organisations have already discovered the advantages
of PDA-based data collection in clinical trials. One such organisation
and a major PDA manufacturer reported record sales of customised
electronic
diaries in 2004. This clinical research organisation has deployed 40000
electronic diaries in 46 languages to 48 countries for use in clinical
trials since 2000ref.
Several thousands of medical software applications and documents are
available for health-care professionals to use. Medical software can be
grouped into major categories: standard medical textbooks and manuals
adapted
for PDAs, PDA-designed medical references, medical dictionaries, drug
reference
and interaction check programmes, medical calculators, medical
prediction
rule applets (a Java software component), document readers, medical
image
viewers, software for medical evidence retrieval, subscription
platforms
to electronic newsletters or journal digests, educational programmes
for
medical students, and medical alert messaging; and comprehensive medical
enterprise solutions integrating with electronic medical records,
patient
management and scheduling systems, electronic ordering, prescribing
and
pharmacy-dispensing systems, and coding, billing and file-sharing. Software
and content are available from commercial suppliers, shareware and
freeware
distributors, health-care organisations, and PDA enthusiasts. The
quality
of medical software applications varies greatly and depends heavily on
accessibility of the information. Initially, most suppliers offered
static
translations of traditional textbooks that were difficult to navigate.
The market has now become more sophisticated, demanding more dynamic
content
with frequent updates taking advantage of the implementation of
wireless
networking protocols in PDAs. Internet websites are available to link
users
to sites dedicated to medical PDA use :
American College of
Physicians
: offers a portable decision support programme, Palm documents with
up-to-date
information on bioterrorism threats and substance, uniform requirements
for manuscripts from the International Committee of Medical Journal
Editors,
helpful tables for physicians instructing medical students and
residents,
an ethics manual, and many other helpful medical documents. Access is
free
British
Medical
Journal (BMJ) PDA webpages : list of medical resources available
for hand-held devices that can be accessed freely. This list contains
short
descriptions of the respective applications, comments from developers
and
users, and documents and links to distributors
Ectopic Brain
:
designed to serve as a starting point for physicians interested in
exploring
the potential of Palm OS hand-held computers in clinical practice.
Includes
a good introduction to hand-held computing, and a constantly updated
archive
of medical documents and software applications for Palm PDAs
Medical Piloteer
Webring
: dedicated to linking sites that provide medical or health-care
resources
for hand-held devices
Mobile Medica : publishes
practice
guidelines and meeting abstracts of several medical societies
Pediatrics on Hand :
outstanding
resource on pediatric PDA applications for health-care professionals
and
parents to use
Journal of Mobile Informatics
:
online journal offering medical PDA product reviews, discussion forums,
documents and software
Uniformed Services Academy of Family
Physicians
: offers free medical software applications and documents. Focuses on
primary
care, and emergency and military medicine, with a good introduction to
palm computing in general
University of Connecticut
:
one of the most comprehensive websites of medical hand-held computing
with
a superb link list
Vertical PDA Network : maintains
resources
dedicated to the promotion and education of health-care professionals
on
the use of mobile and wireless technology. Offers new articles, product
reviews, and software and hardware sales
General use or non-medical programmes (applications) for PDA users :
Amazon : online media dealer
that also
carries commercial medical PDA software and books.
American
College
of Physicians PierPDA : Pier PDA is a collection of evidence-based
medicine compiled by the American College of Physicians. The hand-held
version of PIER includes more than 235 modules focusing on the
diagnosis
and treatment of diseases, a drug database, a search engine with
bookmark
features, evidence indicators, and standard tables (available on a
subscription
basis and regularly updated).
Austin Physician Productivity,
LLC
: Stacoder offers several programmes for easy-to-use diagnosis (ICD-9),
procedure (CPT), and E&M coding. Additionally offers free
evidence-based
medicine PDA applications, such as programmes for preoperative
cardiac
clearance, a cholesterol calculator, and a hypertension guideline based
on the JNC-VII recommendations.
CollectiveMed :
distributor
of mostly commercial medical media titles and software applications.
Duodecim Medical
Publications
Ltd : the mobile evidence-based medicine guidelines include more
than
900 articles. Individual treatment recommendations are supported by
evidence
summaries based on high-quality systematic reviews (issued in
cooperation
with John Wiley & Sons, the publisher of the Cochrane Library). The
texts of EBM Guidelines are concise and are read easily on hand-held
computers
(available on a subscription basis and regularly updated).
Elsevier, Inc MD Consult
offers
a wealth of clinical resources in a modular format. Pocket Consult is
MD
Consult’s hand-held edition that offers access to the Mosby’s Drug
Consult
database, free medical calculators, a medical news and drug alert
service,
as well as the latest abstracts from hundreds of journals.
eMedicine : eMedicine is one of the
largest
websites with clinical knowledge for health professionals. Contains
articles
on 7000 diseases and disorders in 62 medical specialties. eMedicine
sells
its content in part or as a 65-volume total download for PDAs.
eMedicine’s
extensive collection of trauma, terrorism, and biological and nuclear
warfare-related medical content is available as a free PDA eBook for
medical
professionals and the public.
Epocrates : provider of drug
information
and drug interaction check applications for PDA. The product portfolio
reaches from a basic free drug reference that is updated daily to a
comprehensive
subscription-based suite that includes additional content on herbal
medicines,
dose calculators, and references for integrated infectious diseases,
laboratories,
and general medicine. It includes non-FDA-approved indications. It can be
adapted
to health maintenance organisations and hospital formularies, and includes
a customisable medical alert feature.
Franklin Electronic Publishers
:
Franklin pioneered the market for electronic books in the early
1980s.
Proprietary hardware and software eBook readers distribute various
medical
titles that run on PDAs. Categories include diagnostic and therapeutic
manuals, drug references, medical dictionaries, medical student and
nursing
references.
Handango distributes roughly
1400
media titles and software in several categories such as medical
calculators,
charge capture, patient tracking, personal health-care, and medical
specialty
applications.
Handheldmed is the
developer and
distributor of the pocket clinician library, a multitude of medical
reference
titles for hand-held devices (available on subscription). Handheldmed
also
makes a patient tracking application software to access detailed
patient
information on the go.
Healthypalmpilot is
part
of pdaMD (see below) and member of the medical piloteer web-ring of
more
than 40 websites dedicated to medical palm computing. Healthypalmpilot
lists more than 860 resources classified in identification and
classification
of disease processes, lifestyle modifications and alternative medicine,
procedural and treatment protocols, laboratory tests and other
diagnostic
programmes, medical record tracking, references and research
applications.
Iatrosoft, Inc Patient Palm OS
is a data management programme for Palm PDAs. ePatient enables the user
to more efficiently access, collect, and organise medical information
at
the point of care. This application suite allows secure patient and
scheduling
data acquisition, transmission, prescription writing, and photo
documentation,
and comes with an integrated drug and toxicology database.
InfoPOEMs, Inc : the
POEM
(patient-oriented evidence that matters) company allows subscribers to
search its evidence database, all Cochrane
systematic
review abstracts, 220 decision rules, almost 2500 predictive
calculators,
775 summaries of evidence-based practice guidelines, and more. The
product
is combined with Griffith’s 5-Minute Clinical Consult, an ICD9 look-up,
and an E&M coding assistant for payments.
InfusiCalc is a drug
calculator
and information reference for easy calculation of any kind of
intravenous
medication. Information about drugs can be calculated, stored, and
applied
to any individual or clinical situation, allowing for critical
infusions
to be rapidly and accurately commenced. Now includes weight-based bolus
dosing.
MedCalc is one of the most
comprehensive,
free, frequently updated medical calculators that help to compute
common
medical problems in 79 formulas. Units can be switched from SI to
conventional.
Most equations come with bibliographic references and clinical tips.
Designed
by Dr Mathias Tschopp.
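Switching between SI and conventional units, as such calculators offer, amounts to multiplying by a published per-analyte conversion factor. A minimal sketch: the glucose and creatinine factors below are standard published values, while the table and function names are invented for illustration:

```python
# Conventional value x factor = SI value, per standard conversion tables.
FACTOR_TO_SI = {
    "glucose (mg/dL -> mmol/L)": 0.0555,
    "creatinine (mg/dL -> umol/L)": 88.4,
}

def to_si(analyte: str, conventional: float) -> float:
    """Convert a conventional-unit result to SI units."""
    return conventional * FACTOR_TO_SI[analyte]

def to_conventional(analyte: str, si: float) -> float:
    """Convert an SI result back to conventional units."""
    return si / FACTOR_TO_SI[analyte]

print(round(to_si("glucose (mg/dL -> mmol/L)", 90), 1))  # 5.0 mmol/L
```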
Medical Eponyms :
collection
of more than 1500 medical eponyms.
Medical Piloteer :
medical
software for students with excellent illustrations.
MedRules is
a free
application featuring useful clinical prediction rules taken from the
medical
literature. Complete references for each rule may be found by clicking
the question mark icon in the upper-right corner of the screen.
Designed
by Dr Kent E Willyard.
Medtopia, Inc Mobile is a
comprehensive
medical office service based on preconfigured PDAs that
allow
the physician to look up all appointments, medical records, surgeries,
among others, while on the move. The physician can capture office
visits,
laboratories, and procedures, write prescriptions, and document visits
as he treats patients throughout the day.
MemoWare : repository of some
500
free medical documents in various reader formats.
Palmgear : with roughly 26
000 software
titles, Palmgear is one of the largest Palm OS software repositories on
the internet. Palmgear also carries 620 medical software titles.
Patientkeeper, Inc is an
integrated
solution in PDA-based patient management that supports care teams in a
single
hospital or across an entire healthcare system. It allows access to and
management of clinical information at the point of care, capturing billing
and coding data, recording dictation with the PDA, and also has
integrated
prescription functionality.
PEPID is a collection of
detailed medical
and drug reference notes combined with calculators, geared towards
emergency
physicians, nurses and other emergency personnel. The reference
materials
are updated regularly and can also be combined (available on a subscription
basis).
Publicis eHealth Solutions’ Journal-to-Go
:
free subscription service that delivers peer-reviewed journal abstracts
from the National Library of Medicine and news articles from
Reuters. Information received can be customised.
RediReference Clinical Update :
biweekly
clinical newsletter that provides a digest of primary-care-related articles
published during the preceding weeks by major journals
to paying subscribers.
Shots
2005 is a quick reference guide to the 2005 Childhood Immunization
Schedule, a collaboration of the Advisory Committee on Immunization
Practices
(ACIP), the American Academy of Pediatrics (AAP), and the American
Academy
of Family Physicians (AAFP), and the 2004–2005 Adult Immunization
Schedule,
recommended by the Advisory Committee on Immunization Practices (ACIP).
Details on each vaccine are available by clicking on the vaccine names.
Skyscape carries one of the
largest
portfolios of medical hand-held references available. Skyscape does not
create its own content. It provides more than 240 standard medical
references,
from more than 30 major medical publishers for over 35 medical
specialties.
Skyscape also offers customisable channel content.
Thomson Scientific & Healthcare
provides a broad-spectrum drug, education, and clinical information
media
and software tools. The widely used and well-known US Physician’s Desk
Reference (PDR) and the Micromedex US Pharmacopoeia Drug Information
(USPDI)
were recently made available for PDA use. While the mobilePDR is free
to
US physicians, mobile Micromedex is a subscription-based application.
The
mobile Micromedex USPDI also includes non-FDA approved indications.
TNM
Staging : free TNM Staging Tool from the University of Southern
California
Unboundmedicine :
developer
and distributor of medical references and software applications using
the
proprietary CogniQ platform in association with medical publishers,
professional
associations, information aggregators, medical institutions, and
pharmaceutical
and supply companies.
.... guidelines from professional societies or health-care agencies ...
:
American
College
of Cardiology (ACC) : the ACC has contracted Skyscape to convert
its
guidelines into a PDA accessible form. The application can be
downloaded
for free from the ACC website
American
College
of Chest Physicians (ACCP) : the 7th ACCP conference on
antithrombotic
and thrombolytic therapy, ACP/ACCP management of acute exacerbations of
COPD algorithm, guidelines for diagnosis and management of lung cancer,
assessment of diagnostic tests for ventilator-associated pneumonia,
guidelines
for weaning and discontinuing ventilatory support, and a guideline on
cough
management as a defence mechanism and as a symptom, as well as
pulmonary
rehabilitation are available for download
American
College
of Physicians (ACP) : the ACP offers updates for subspecialties,
ACP multiple small feedings of the mind, common ICD-9 codes, guide to
bioterrorism
identification, and guides to chemical terrorism identification through
Apprisor for PDA download. On its own website, the ACP provides several
documents and guides for evidence-based medicine approach to common
medical
problems
DHHS HIV and AIDS
medical
practice guidelines : the US department of Health and Human
Services
offers guidelines for the use of anti-retroviral drugs in HIV-infected
children, adolescents, pregnant women, and adults, and the prevention
of
opportunistic infections in patients with HIV
DHHS Info Drug database
and HIV/AIDS
glossary : the US Department of Health and Human Services offers a
programme
for possible sexual, injecting-drug-use, or other
non-occupational
exposure to HIV, including considerations related to antiretroviral
therapy
National
Cancer
Institute (NCI ) Cancer Staging and Treatment : the NCI Cancer
Information Service offers one of the most comprehensive cancer
databases.
It contains peer-reviewed summaries on cancer treatment, screening,
prevention,
genetics and supportive care, and complementary and alternative
medicine;
a registry of about 2000 open and 13,000 closed cancer clinical trials
worldwide; and directories of physicians, professionals who provide
genetic
services, and organizations that provide cancer care
National Guideline Clearinghouse
(NGC)
PDA documents : public resource for evidence-based clinical
practice
guidelines. NGC is a collaborative initiative of the US Agency for
Healthcare
Research and Quality (AHRQ) and US Department of Health and Human
Services.
NGC was originally created by AHRQ in partnership with the American
Medical
Association and the American Association of Health Plans. All NGC
summaries
are available in a text format and downloadable to PDAs
Obesity Education
Initiative
(OEI) guidelines on overweight and obesity in adults : the NHLBI
offers
evidence-based medicine applications for asthma treatment, cholesterol
management, and obesity education on its own website. Through Apprisor
the NHLBI distributes the Joint National Committee on management of
hypertension
(JNC 7) report reference card and the NCEP ATP III quick reference
The
National
Quality Measures Clearinghouse (NQMC), sponsored by the AHRQ,
US Department of Health and Human Services, is a database and website
for
information on specific evidence-based health care quality measures,
intended
to promote widespread access to quality measures
by
the health care community and other interested individuals. Brief
summaries
of all measures can be viewed and downloaded in various formats,
including
PDA-compatible formats.
US Agency for
Healthcare
Research and Quality (AHRQ) Pneumonia tool : useful interactive
application
to assist clinicians in determining the most appropriate care for newly
diagnosed cases of community-acquired pneumonia at the point of care.
It
will help calculate the severity index of a pneumonia patient. The
output
includes mortality rates and pneumonia class types.
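The severity index such a tool computes is the pneumonia severity index (PSI), which sums weighted points for demographics, comorbidities, examination and laboratory findings, and maps the total to risk classes with associated mortality rates. A much-simplified sketch modelling only a subset of the published variables, for illustration only, not clinical use:

```python
# Point values follow the published PSI rule, but only a few of the
# ~20 variables are modelled here; this is not a clinical tool.

def psi_points(age, female, nursing_home, rr, sbp, altered_mental_status):
    points = age - (10 if female else 0)      # age in years; women: age - 10
    points += 10 if nursing_home else 0       # nursing-home resident
    points += 20 if rr >= 30 else 0           # respiratory rate >= 30/min
    points += 20 if sbp < 90 else 0           # systolic BP < 90 mm Hg
    points += 20 if altered_mental_status else 0
    return points

def psi_class(points):
    # Risk classes II-V per the published cut-offs (class I is assigned
    # by a separate low-risk screening step not modelled here).
    if points <= 70:
        return "II"
    if points <= 90:
        return "III"
    if points <= 130:
        return "IV"
    return "V"

p = psi_points(age=78, female=False, nursing_home=True,
               rr=32, sbp=85, altered_mental_status=False)
print(p, psi_class(p))  # 78 + 10 + 20 + 20 = 128 -> class IV
```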
USPSTF clinical
preventive
services PDA programme : the USPSTF also offers an interactive PDA
application that identifies clinical preventive services for screening,
counselling, and preventive medication based on the patient's age, sex
and pregnancy status. The application is slow and consumes a lot of
memory.
An alternative programme named "Recs" was developed by Dr Jeff Weinfeld
at Georgetown University, Washington, DC, USA. It is easy to install
and
covers the same content. This program can be downloaded here
..., and helpful programmes for general PDA use :
Adobe Acrobat Reader for Palm :
free
document reader that allows reading, sharing, and printing of Adobe PDF
documents on Palm PDAs
AvantGo : free subscription
service
that delivers information in customisable channels to Palm PDAs.
Dataviz Documents : commercial
application
that allows converting, reading, and editing of Microsoft Office
documents
and other documents into Palm format.
Instep Print and Fax : 2
integrated
shareware utilities that allow printing from all intrinsic Palm and
other
applications using the built-in infrared port.
iSilo Document Reader : shareware
multiple
format document reader that also reads most professional societies’ and
evidence-based medicine guidelines.
iSiloX Conversion Tool : free
programme that
allows converting web pages into iSilo reader documents to be viewed on
Palm PDAs.
Additionally, some medical journals such as the Journal of the American
Medical Association (JAMA) regularly announce and discuss novel
hand-held
computer software titles. Examples of common medical software : [table not reproduced]
A personal perspective
As an internist and gastroenterologist, I face the same challenges
that all academic physicians do: attending on the wards, clinics,
critical
care units, and emergency rooms; doing consultations for other
specialties;
dealing with numerous conferences, administrative work, lecturing, and
bedside teaching; being an investigator in clinical trials; mentoring
doctoral
students; and running a basic science research laboratory, which often
hardly fit into those 24 hours, unless one is very organised. I
perceived
the arrival of the Palm Pilot (Palm Inc, Sunnyvale, CA, USA) in 1996 as
a blessing; it quickly changed how I organised my day, kept abreast of
the ever-changing specialties of medicine and biomedical science,
obtained
and accessed medical information, and taught students. My old spiral
notebook
is retired now. On a typical day, my PDA wakes up 30 minutes before I
do,
logs on to my notebook as well as the internet, and synchronises and
updates
all PDA applications. Not only are contacts, appointments, and medical
references kept up to date in this way, but my PDA e-mail application
is
also programmed to retrieve selected e-mails, such as electronic tables
of contents from medical journals, alerts from the FDA Medwatch system,
and other resources, into my e-mail inbox. On the way to work I can
review,
mark, and erase these e-mails. Once at the hospital, my PDA reminds me
of conferences and meetings, and displays a to-do list for the day. When I
see patients, I rely on drug reference and interaction applications,
institutional
microbial spectra databases, medical calculators, prediction rules, and
specific topics in PDA editions of popular medical reference manuals.
Additionally,
I have many guidelines from our institution, professional
organisations,
and agencies, as well as pdf excerpts from journal articles stored in
my
memory card. I do not believe PDA versions of large medical textbooks
are
helpful, because they are often difficult to navigate. I am currently
investigating
the usefulness of the new PubMed on Tap programme, whenever I have
access
to the institutional W-LAN. Our department receives a fair number of
patients
with gastrointestinal cancers. Staging of uncommon cancers is easy with
a TNM (tumour, node, metastasis) staging programme. I can customise and
print current chemotherapy protocols with a shareware application. An
add-on
to this shareware application allows me to programme and print
protocols
for rare cancers. At ward rounds with students, I take full advantage
of
the multimedia capabilities of my PDA: I can display images from my
personal
medical image library or other PDA reference materials, play heart
murmur
or lung sound recordings, and use the screen to quickly sketch
something
to make a teaching point. I can carry and share with students (via
infrared)
a self-created collection of text notes, customised to the patients we
see together. In clinics where I see many patients enrolled in clinical
trials, I quickly enter, access, and sort essential data on
spreadsheets.
The spreadsheets were created with my notebook spreadsheet application,
transferred to and updated on my PDA with a commercial programme. This
software also helps me to review and store my presentations for
lectures
and talks. New versions of PDAs can also act as USB memory sticks. At
the
end of the day, my PDA synchronises and backs up the day's data with my
notebook before it charges for the night
PDA safety and security
Information recording and interchange always raise the question of
security and privacy. Overall, PDA security hazards are probably
similar
to other computers used in hospitals and elsewhere. Catastrophic data
loss
can only be prevented with regular backups. PDA viruses have been
reported
for Microsoft's Windows Mobile operating system and also, to a
lesser
degree, for the Palm operating system. Major security firms are
addressing
this problem with the development of commercial antivirus products for
hand-held devices. In the USA, PDA-based patient data processing and
storage
must comply with the Health Insurance Portability and Accountability
Act (HIPAA) of 1996. The Centers for Medicare and Medicaid Services
(CMS)
of the US Department of Health and Human Services provide information
technology
professionals and the general public with extensive resources to
address
the issues on their information security programme websiteref.
In
addition to a general policy manual, the CMS has outlined fundamental
regulations as well as system architecture and security requirements
for
the acquisition, storage, management, and transmission of patient data.
As a general rule, these policies are developed to provide a
defence-in-depth
security structure, along with a least-privilege approach and a
need-to-know
basis for all information access. Developers can download security
threat
identification resources based on their occurrence and importance in
the
current CMS environment. Before approval, applications have to pass the
contractor assessment security tool (CAST) test to record their
compliance
with the CMS. Several specific new risks and vulnerabilities arise with
wireless networks. Bluejacking (ie, unauthorised accessing of
Bluetooth-enabled
devices) in airports and other public places is advancing to a new
hacker
sport. These problems need to be addressed by hardware and software
makers
with improved encryption and authentication technology. Currently, no
evidence
shows that wireless-enabled PDAs interfere with the functioning of
implanted
cardiac pacemakers or defibrillatorsref1,
ref2,
ref3
(Chen D, Soong SJ, Grimes GJ, Orthner HF. Wireless local area network
in
a prehospital environment. BMC Med Inform Decis Mak 2004; 4: 12)
Challenges of current PDA technology and future outlook
Evidence of PDA use and dominance in medical education, clinical
practice,
and research is still evolving. Most studies available so far have not
been randomised, controlled, or multicentre in design. The fact
that
physicians can carry an entire shelf of medical reference textbooks on
a hand-held computer's memory card does not automatically mean that
physicians
know their contents or can apply their knowledge appropriately in
clinical
practice. The increasing incidence of the so-called palmomental reflex
among residents and medical students should remind clinical educators that
PDAs are not peripheral brains and are a poor substitute for ad-hoc
clinical
knowledgeref.
At
a time when governments, health-care organisations, and insurers
worldwide
cannot stop entertaining the themes of necessity assessment, cost
saving,
and down-sizing, we need convincing arguments that the extra expense
of
investing in PDA technology can actually improve quality of care,
save
lives, and ultimately save health-care costsref.
The
IT industry has recognised health care as the next big marketref1,
ref2.
It
will be up to health-care professionals who depend on PDAs to inform
PDA manufacturers of users' true needs, do the necessary research, and
actively direct the development of new hardware and software. The
future
of information exchange in medicine is digital and wirelessref.
What
will a medical PDA look like in 2015? It will probably be housed in
a ceramic or lightweight alloy case, and hopefully be no larger but
substantially
lighter than current shirt-pocket-sized devices. New semiconductor
technology
will allow hand-held computers to be equipped with processors that can
handle much more work than the best desktop systems that are currently
available, while consuming less power to extend battery life. Memory
will
no longer be an issue, because data will be mainly kept in network
storage
systems. Manual data entry is still a problem in current versions of
PDAs.
In the future, authorised, secure logons to the PDA and data entry will
be done with combined speech and fingerprint recognition by
sophisticated
audio hardware and a new high-resolution generation of touch-sensitive
screens. Graffiti 2 (PalmSource Inc) characters will be further
developed
into true handwriting recognition. Speech processing will also be a
reality,
replacing many dictation methods currently used. Very high network
speeds
will provide immediate access to clinical and administrative data,
including
imaging information such as procedural movies; three-dimensional
ultrasonography;
CT, MRI, or PET scans; histological slides; microbial cultures; and
institutional
and remote reference systems at any place and time. Medical
applications
will go beyond organisation and storage of information. PDAs could
evolve
into expert systems that access information from many sources (ie,
classic
textbook style references, data from basic and clinical research and
genome
scans, environmental and public-health information, and results from
ongoing
clinical trials), match the information with the patient's medical
records
from current or past admissions or visits, apply prediction rules,
calculate
clinical equations, and integrate all the data into an overall
information
package for clinicians. Users will be able to obtain and share opinions
on patients with colleagues and international experts with ad-hoc
medical
multimedia conferencing. PDA-based medical information management could
even have an environmental effect that goes beyond paper-saving. The
environmental
effects of two applications of wireless technologies were compared with
those of conventional technologiesref.
Compared
with reading a newspaper, receiving the news on a PDA
resulted in the release of 32–140 times less CO2, several
orders
of magnitude less NOx and SOx, and the use of 26–67 times less
water. Wireless teleconferencing resulted in one to
three
orders of magnitude less CO2, NOx, and SO2
emissions
than those from business travel. Is this future scenario widely off the
mark? Perhaps so, but critics should remember that many theoretical
predictions
of the future have inspired the design of devices used today. However,
it is still certain that no computer system can ever replace dedicated,
experienced clinicians and their empathic interaction with patients and
familiesref.
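The "clinical equations" step of the expert system described above can be sketched concretely. Below is a minimal, hypothetical illustration: the `Patient` record, the function names, and the 60 mL/min threshold of the toy dosing rule are illustrative assumptions; only the Cockcroft-Gault creatinine-clearance formula itself is a standard clinical equation.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Hypothetical patient record, standing in for the data an expert
    # system would pull from current or past admissions.
    age: int
    weight_kg: float
    sex: str                 # "M" or "F"
    serum_creatinine: float  # mg/dL

def creatinine_clearance(p: Patient) -> float:
    """Cockcroft-Gault estimate of creatinine clearance (mL/min)."""
    crcl = (140 - p.age) * p.weight_kg / (72 * p.serum_creatinine)
    return crcl * 0.85 if p.sex == "F" else crcl

def renal_dosing_alert(p: Patient, threshold: float = 60.0) -> str:
    """Toy prediction rule: flag patients whose clearance is below a threshold."""
    if creatinine_clearance(p) < threshold:
        return "review renally cleared drugs"
    return "no renal dose adjustment"

patient = Patient(age=78, weight_kg=60.0, sex="F", serum_creatinine=1.4)
print(round(creatinine_clearance(patient), 1))  # ≈ 31.4 mL/min
print(renal_dosing_alert(patient))
```

A real expert system would of course draw these inputs from the record sources listed above rather than from hand-entered literals, and would combine many such rules into the overall information package.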
Having doubled between 1998 and 2003, the NIH budget is expected to be $28.6 billion for fiscal year 2007, a 0.1% decrease from last year (Office of Budget. FY 2007 budget in brief: advancing the health, safety, and well-being of our people. Washington, D.C.: Department of Health and Human Services, 2006), or a 3.8% decrease after adjustment for inflation, the first true budgeted reduction in NIH support since 1970. Whereas national defense spending has reached approximately $1,600 per capita, federal spending for biomedical research now amounts to about $97 per capita, a rather modest investment in "advancing the health, safety, and well-being of our people."

[Figure: annualized growth rates (adjusted for inflation) of the NIH budget, 1971–2005]
Meanwhile, for more than 10 years the pharmaceutical industry has been investing larger amounts in research and development than the federal government: $51.3 billion in fiscal year 2005 (ref), for instance, or 78% more than NIH funding that year. Korn and colleagues have argued that stability and quality can be ensured by maintaining overall funding at an annual growth rate of 8–9% (unadjusted for inflation) (ref). The required annual growth rate should rather be 5–6% real growth plus inflation: the annual growth rate over the past 30 years has been approximately 10%, which reflects an average annual real growth rate of 5.2% and an average inflation rate of 4.8% (ranging from 1.1% to 13.3%) (ref).
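The relationship between these nominal, real, and inflation figures is simple compounding; a quick check, using the figures from the text:

```python
def nominal_growth(real: float, inflation: float) -> float:
    """Compound a real growth rate with an inflation rate."""
    return (1 + real) * (1 + inflation) - 1

def real_growth(nominal: float, inflation: float) -> float:
    """Deflate a nominal growth rate by inflation."""
    return (1 + nominal) / (1 + inflation) - 1

# 5.2% real growth compounded with 4.8% inflation reproduces the
# ~10% historical nominal growth rate cited above:
print(f"{nominal_growth(0.052, 0.048):.1%}")  # 10.2%

# Conversely, the FY2007 figures (a 0.1% nominal cut amounting to a
# 3.8% real cut) imply biomedical inflation of roughly:
print(f"{(1 - 0.001) / (1 - 0.038) - 1:.1%}")  # 3.8%
```

This also makes clear why the two proposals differ: 8–9% nominal growth only preserves 5–6% real growth while inflation stays near its historical average.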