
Research & Impact


Making an Impact for a Better World

As computing continues to transform our world, the research we're pursuing at Stanford Computer Science seeks to ethically create, shape, and empower the new frontier. From the latest in robotics to foundation models to cryptocurrency, Stanford computer scientists are making an impact on the world beyond our academic walls. 


Faculty Spotlight: Omer Reingold, the Rajeev Motwani Professor in Computer Science

"A computer scientist teaching a theater class is a bit unusual, I’ll grant you that. But is it so strange? For me, classifying different parts of campus to left-brain-versus-right-brain kind of thinking is just an unfortunate stereotype. I'd much rather go with ‘creativity is creativity is creativity.'" Read Omer Reingold's Story  

In the News: See Our Research in Action


Best Paper Award: "Breaking the Metric Voting Distortion Barrier"

Stanford professor Moses Charikar and his two co-authors, postdoc Kangning Wang and PhD student Prasanna Ramakrishnan, won the Best Paper Award at the ACM-SIAM Symposium on Discrete Algorithms (SODA24).

Read more as Kangning and Prasanna discuss their passion for research, the challenges they faced, and the significance of this award.


A Robotic Diver Connects Human Sight and Touch to the Deep Sea


The Future of AI Chat: Foundation Models and Responsible Innovation

Podcast guest Percy Liang, an authority on AI, says that we are undergoing a paradigm shift in AI powered by foundation models: general-purpose models trained at immense scale, such as the models behind ChatGPT.

CS Faculty & Their Research

Explore our network of faculty members and the innovations arising from their research. They are shaping a new era of solutions and the next generation of thought leaders and entrepreneurs.


Meet Our Faculty & Their Research

Stanford Computer Science faculty members work on the world's most pressing problems, in conjunction with other leaders across multiple fields. Fueled by academic and industry cross-collaborations, they form a network and culture of innovation.

The Emmy Award-winning video looks back at a remarkable six decades of AI work at Stanford University.

Stanford has been a leader in AI almost since the day the term was dreamed up by John McCarthy in the 1950s. McCarthy would join the Stanford faculty in 1962 and found the Stanford Artificial Intelligence Lab (SAIL), initiating a six-decades-plus legacy of innovation. Over the years, the field has grown to welcome a diversity of researchers and areas of exploration, including robotics, autonomous vehicles, medical diagnostics, natural language processing, and more. All the while, Stanford has been at the forefront in research and in educating the next generation of innovators in AI. Artificial intelligence would not be what it is today without Stanford.  


Research at the Affiliate Programs

Stanford Computer Science has a legacy of working with industry to advance real-world solutions. Membership in our affiliate programs provides companies with access to the research, faculty, and students to accelerate their innovations.


Join the Affiliate Programs

Interested in the benefits of membership in our affiliate programs, sponsored research, executive education programs, or student recruitment? Get started by contacting:

Joseph Huang, PhD | Executive Director of Strategic Research Initiatives, Computer Science, Stanford University | [email protected]

Connecting Students & Research: Jump In

At Stanford, students do amazing research, and their projects are widely recognized as some of the best in the world; that work is a large part of Stanford's reputation as one of the top CS programs. If you're a student with a passion for meaningful research, our CURIS and LINXS programs are designed to get you started.


LINXS Program

The Stanford LINXS Program is an eight-week summer residential program that brings innovative undergraduates, who are currently attending Historically Black Colleges & Universities and Hispanic Serving Institutions, to Stanford for an immersive academic research and graduate school preparation experience. 


CURIS Program

CURIS is the undergraduate research program of Stanford's Computer Science Department. Each summer, 100+ undergraduates conduct computer science research advised and mentored by faculty and PhD students.


Research in Computer Science


Security and Privacy

A stable, safe, and resilient cyberspace is vital for our economic and societal wellbeing. This concentration helps students learn how to fortify cyber networks, combat threats, and practice "white hat" hacking. Systems research lets students improve real-world systems to make them stronger and more secure. The concentration also includes data-driven analysis of privacy and social networks. After graduation, our students often work in private industry or in government.

Labs: OSIRIS, CCS

Sample research projects:


Damon McCoy, one of the department's newest faculty members, researched counterfeit pharmacy affiliate networks. Online sales of counterfeit or unauthorized products drive a robust underground advertising industry that includes email spam, "black hat" search engine optimization, forum abuse, and so on. Virtually everyone has encountered enticements to purchase drugs, prescription-free, from an online "Canadian Pharmacy." However, even though such sites are clearly economically motivated, the shape of the underlying business enterprise is not well understood, precisely because it is underground.

Learn more about the business of online pharmaceutical affiliate programs


Learn more about Digital Assembly


Learn more about Seattle Open Peer-to-Peer Computing


Big Data Management, Analysis, and Visualization

This concentration covers the organization and governance of large volumes of data: retaining data obtained from many sources, from an entire city down to a single individual, while ensuring the high level of data quality needed for analysis. Visualizing such data brings structure and simplicity to it.

Labs: CUSP


Learn more about RevEx and download the demo


In this related paper, Gerig studies the early developing brain by displaying the longitudinal MRI scans of the same subject's brain at various ages, from two weeks to two years.

Learn more about Prof. Gerig's study


In Prof. Chunara's research on US obesity rates, for example, Facebook data are used to cross-measure user interests and obesity prevalence within certain metropolitan populations. Activity-related interests across the US, and sedentary-related interests across NYC, were significantly associated with obesity prevalence.

Learn more about Chunara's study


Prof. Ergan is also the head of the Future Building Informatics and Visualization Lab (biLab).

Game Engineering and Computational Intelligence

This concentration is for students interested in learning game programming and taking part in game development and design. Computer graphics, human-computer interaction, artificial intelligence, and allied computational fields all play a role in this burgeoning industry. Art and engineering intersect to create innovative game environments that captivate players.

Labs: Game Innovation Lab, MAGNET

Professor Julian Togelius specializes in artificial intelligence and has programmed AI agents that play several existing video games. In the clip above, an AI agent plays through Super Mario Bros.

Learn more about Professor Julian Togelius's project

Algorithms and Foundations

The theoretical study of computer science allows us to better understand the capabilities and the limitations of exactly what problems computers can solve, and when they can solve those problems efficiently. New theory helps pave the way for algorithmic breakthroughs that engineers can build on to create new solutions and technology. At NYU Tandon, the Algorithms and Foundations group is composed of researchers interested in applying mathematical and theoretical tools to a variety of disciplines in computer science, from machine learning, to computational science, to geometry, to computational biology, and beyond.

Christopher Musco and doctoral student Raphael A. Meyer wrote a paper titled "Hutch++: Optimal Stochastic Trace Estimation" that introduces a new randomized algorithm for implicit trace estimation, a linear algebra problem with applications ranging from computational chemistry to the analysis of social networks and deep neural networks. Their method is the first to improve on the popular Hutchinson's method for the problem, which was introduced over 30 years ago. Read the paper
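For context, the classical method that Hutch++ improves on is simple enough to sketch: Hutchinson's estimator approximates tr(A) using only matrix-vector products with A, exploiting the identity E[g^T A g] = tr(A) for random sign vectors g, and Hutch++ gains its speedup by also spending some of those products on a low-rank approximation of A. Below is a minimal NumPy sketch of the classical Hutchinson baseline only; it is illustrative rather than the authors' code, and the function name and random test matrix are invented for the example.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, seed=None):
    """Estimate tr(A) given only a matrix-vector product oracle for A.

    Hutchinson's estimator: if g has independent +1/-1 entries, then
    E[g^T A g] = tr(A), so averaging g^T (A g) over many random probes
    converges to the trace.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += g @ matvec(g)               # one sample of g^T A g
    return total / num_samples

# Usage: estimate the trace of a matrix accessed only through A @ v.
rng = np.random.default_rng(0)
B = rng.standard_normal((500, 500))
A = B @ B.T
print(hutchinson_trace(lambda v: A @ v, n=500, num_samples=200))
print(np.trace(A))  # reference value for comparison
```

The error of this baseline shrinks like 1/sqrt(num_samples); the paper's contribution is showing that a smarter allocation of the same matrix-vector products converges substantially faster.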

Lisa Hellerstein is the co-author of "The Stochastic Score Classification Problem." This paper presents approximation algorithms for evaluating a symmetric Boolean function in a stochastic environment. The algorithms address problems where the goal is to determine the order in which to perform a sequence of tests, so as to minimize expected testing cost. Read the paper
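To make the setting concrete, here is a hedged toy sketch of the problem (not the paper's approximation algorithms): each test has a cost and an independent probability of passing, the "score" is the number of tests that pass, and testing can stop as soon as every still-achievable score falls into the same class. The cheapest-first ordering used below is a naive illustrative stand-in for the orderings analyzed in the paper.

```python
import random

def classify_score(tests, thresholds, seed=None):
    """Adaptively run stochastic tests until the score class is determined.

    tests: list of (cost, prob) pairs; test i passes with probability prob.
    thresholds: sorted cutoffs; a score s belongs to class sum(s >= t).
    Runs tests cheapest-first (illustrative only) and stops once the lowest
    and highest still-achievable scores fall in the same class.
    Returns (class_index, total_cost_spent).
    """
    rng = random.Random(seed)
    order = sorted(range(len(tests)), key=lambda i: tests[i][0])
    passed, spent, pending = 0, 0.0, len(tests)

    def klass(score):
        return sum(score >= t for t in thresholds)

    while klass(passed) != klass(passed + pending):
        cost, prob = tests[order.pop(0)]
        spent += cost
        pending -= 1
        if rng.random() < prob:
            passed += 1
    return klass(passed), spent

# Four tests, two classes: score < 2 versus score >= 2.
tests = [(1.0, 0.9), (2.0, 0.5), (0.5, 0.2), (3.0, 0.7)]
print(classify_score(tests, thresholds=[2], seed=7))
```

The interesting algorithmic question, and the subject of the paper, is how to order the tests so that the expected total cost at stopping time is close to optimal.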



The computing and information revolution is transforming society. Cornell Computer Science is a leader in this transformation, producing cutting-edge research in many important areas. The excellence of Cornell faculty and students, and their drive to discover and collaborate, ensure our leadership will continue to grow.

The contributions of Cornell Computer Science to research and education are widely recognized, as shown by the two Turing Awards, two von Neumann Medals, two MacArthur "genius" awards, and dozens of NSF CAREER awards our faculty have received, among numerous other signs of success and influence.

To explore current computer science research at Cornell, follow the links below.

Research Areas

Artificial Intelligence

Knowledge representation, machine learning, NLP and IR, reasoning, robotics, search, vision

Computational Biology

Statistical genetics, sequence analysis, structure analysis, genome assembly, protein classification, gene networks, molecular dynamics

Computer Architecture and VLSI

Processor architecture, networking, asynchronous VLSI, distributed computing

Database Systems

Database systems, data-driven games, learning for database systems, voice interfaces, computational fact checking, data mining

Graphics

Interactive rendering, global illumination, measurement, simulation, sound, perception

Human Interaction

HCI, interface design, computational social science, education, computing and society

Machine Learning

Artificial intelligence, algorithms

Programming Languages

Programming language design and implementation, optimizing compilers, type theory, formal verification

Robotics

Perception, control, learning, aerial robots, bio-inspired robots, household robots

Scientific Computing

Numerical analysis, computational geometry, physically based animation

Security

Secure systems, secure network services, language-based security, mobile code, privacy, policies, verifiable systems

Software Engineering

The software engineering group at Cornell is interested in all aspects of research for helping developers produce high-quality software.

Systems and Networking

Operating systems, distributed computing, networking, and security

Theory

The theory of computing is the study of efficient computation, models of computational processes, and their limits.


Computer Vision


Computer science articles from across Nature Portfolio

Computer science is the study and development of the protocols required for automated processing and manipulation of data. This includes, for example, creating algorithms for efficiently searching large volumes of information or encrypting data so that it can be stored and transmitted securely.


AI produces gibberish when trained on too much AI-generated data

Generative AI models are now widely accessible, enabling everyone to create machine-made content of their own. But these models can collapse if their training data sets contain too much AI-generated content.

  • Emily Wenger


A multi-task learning strategy to pretrain models for medical image analysis

Pretraining powerful deep learning models requires large, comprehensive training datasets, which are often unavailable for medical imaging. In response, the universal biomedical pretrained (UMedPT) foundational model was developed based on multiple small and medium-sized datasets. This model reduced the amount of data required to learn new target tasks by at least 50%.

Latest Research and Reviews


Metaheuristics based dimensionality reduction with deep learning driven false data injection attack detection for enhanced network security

  • Thavavel Vaiyapuri
  • Huda Aldosari


Prolonged exposure to mixed reality alters task performance in the unmediated environment

  • Xiaoye Michael Wang
  • Daniel Southwick
  • Timothy N. Welsh


Quantum computational finance for martingale asset pricing in incomplete markets

  • Patrick Rebentrost
  • Alessandro Luongo


The NACOB multi-surface walking dataset

  • Oussama Jlassi
  • Vaibhav Shah
  • Philippe C. Dixon


Performance analysis of multi-angle QAOA for p > 1

  • Igor Gaidai
  • Rebekah Herrman


Cosine similarity-guided knowledge distillation for robust object detectors

  • Sangwoo Park
  • Donggoo Kang
  • Joonki Paik


News and Comment


AI-driven autonomous microrobots for targeted medicine

Navigating medical microrobots through intricate vascular pathways is challenging. AI-driven microrobots that leverage reinforcement learning and generative algorithms could navigate the body’s complex vascular network to deliver precise dosages of medication directly to targeted lesions.

  • Mahmoud Medany
  • S. Karthik Mukkavilli
  • Daniel Ahmed


Don’t flock to faulty AI fashion

  • Mark Buchanan


ChatGPT has a language problem — but science can fix it

The Large Language Models that power chatbots are known to struggle in languages outside of English — this podcast explores how this challenge can be overcome.

  • Nick Petrić Howe


Why ChatGPT can't handle some languages

In a test of the chatbot's language abilities, it failed in certain languages.

6G: the catalyst for artificial general intelligence

6G might integrate 5G and AI to merge physical, cyber and sapience spaces, transforming network interactions and enhancing AI-driven decision-making and automation. The semantic approach to communication will train AI while selectively informing on goal achievement, moving towards artificial general intelligence, presenting new challenges and opportunities.

  • Emilio Calvanese Strinati


Slow productivity worked for Marie Curie — here’s why you should adopt it, too

Do fewer things, work at a natural pace and obsess over quality, says computer scientist Cal Newport, in his latest time-management book.

  • Anne Gulland



How to do good research

by Gregor v. Bochmann, School of Information Technology and Engineering (SITE), University of Ottawa (This text was prepared in October 2009 at the Hunan University of Science and Technology in Xiangtan, China)

Criteria for funding research in Canada: the most important source of funding for university-based research in computer science is the Natural Sciences and Engineering Research Council (NSERC).

  • Discovery Program (unconstrained basic and applied research) – relatively small amounts of money. Criteria:
      • Good researcher (past performance)
      • Good research proposal (innovative, relevant to current state of the art in theory or practice, well explained and justified)
      • Good opportunity for training of researchers at the Master, PhD, and post-doctoral levels
  • Collaborative research (with industry involvement) – relatively large amounts of money. Additional criteria:
      • Potential application in the industrial context
      • Evidence of industrial interest: (1) letters of support explaining relevance; (2) industrial funding: for many programs, the budget is based on matching industry funds

How to do good research: important points (overview)

  • Choose an interesting area for research
  • Identify an interesting research topic (a problem for which there is no good solution)
  • Have a good idea for how to improve the state of the art
  • Apply it to some examples (realistic case studies, if possible)
  • Prove some properties of your approach (logical properties or analytical performance predictions) and show that it is better than the current state of the art
  • Do simulation studies (e.g. for performance) and show that your approach is better than the current state of the art (a minimal benchmarking sketch follows this list)
  • Build a software tool that supports your approach
  • Do a systematic comparison with other approaches to the same problem and discuss advantages AND disadvantages of your approach
  • Write up your results in papers that make these results accessible to the interested expert.
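As a concrete companion to the simulation and comparison points above, a minimal benchmarking harness might look like the sketch below; the `baseline` and `proposed` functions are hypothetical placeholders standing in for the current state of the art and your own approach.

```python
import statistics
import time

def benchmark(fn, inputs, repeats=5):
    """Return the best-of-`repeats` wall-clock time of fn on each input."""
    times = []
    for x in inputs:
        runs = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn(x)
            runs.append(time.perf_counter() - start)
        times.append(min(runs))  # best-of-n damps scheduler noise
    return times

# Hypothetical stand-ins: replace with the real competing implementations.
def baseline(xs):
    return sorted(xs)

def proposed(xs):
    return sorted(xs, reverse=True)

inputs = [list(range(n, 0, -1)) for n in (10_000, 100_000)]
for name, fn in (("baseline", baseline), ("proposed", proposed)):
    t = benchmark(fn, inputs)
    print(name, [round(s, 4) for s in t], "mean", round(statistics.mean(t), 4))
```

Fixing the same inputs for every contender and repeating each measurement is the minimum needed for a fair comparison; a real study would also vary input sizes and distributions and report variability, not just means.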

How to do good research: more details

  • Relevance for practical applications
  • Area that has not yet been explored thoroughly
  • Area that corresponds to your past experience (unless you want to change fields)
  • You must be familiar in general with your research area. Depending on your past experience, this may require much reading. Read surveys and overview articles.
  • In order to identify your research topic, you have to look around (within the research area) to find a problem that has not yet been solved, or for which the existing solutions could be improved. For this purpose, you have to read more detailed papers. When you think you have found a good research topic, you have to study all literature related to the problem at hand. This requires much reading. Look in good journals (e.g. ACM or IEEE Transactions) and good conferences (ACM and IFIP conferences are often better than IEEE conferences; check the acceptance rate of conferences; specialized conferences usually have more interesting papers than general-purpose conferences).
  • Google is a very useful search tool, in particular if you know the title of the paper you are looking for. I think it is better not to rely simply on Google to do your literature search. Better: identify the relevant GOOD journals and conferences in the area and look through the published articles; read the abstract of those that have an interesting title in relation to your interest; read the whole article superficially if the abstract is promising; read the article again (thoroughly) if you are interested in the details.
  • Talk to other people knowledgeable in your research area about your readings and your questions; contact the author of a paper to get a copy, or if you have questions after reading the paper in detail.
  • You may find that the problem that you had chosen is already mostly solved. Maybe it would be better for you to find another topic. Go on reading.
  • In the process of doing this, you may find that there is no good survey paper on the area yet. You may write such a paper.
  • Such a “good idea” comes mostly during the reading of some interesting paper, or by noting that for a given situation described in one paper, an approach presented in another paper may be useful. Most of the time, these “good ideas” do not lead to big results, but only to small improvements. Sometimes, while working on such small improvements, some further “good idea” may appear which may lead to more important “improvements”. Then you have to show that your idea “works” (see below).
  • There is no recipe that always leads to a "good idea". You have to be inventive. It also helps to have a critical attitude towards the paper you are reading: maybe it is not as simple as the authors say?
  • In order to check whether a “new idea” works, you should always try it out with some small examples first, and then some more complex one – if possible an example that covers all aspects of the problem.
  • To convince yourself and others that your approach is interesting in practice, it is very useful to apply it to some realistic case study (this may be a prototype implementation or an extensive simulation study)
  • Which of the above three points is more relevant for showing that your idea works depends on the nature of your problem. Often all of these points may be pursued in parallel.
  • The research work under point (4) should be pursued such that at the end a systematic comparison with all other known approaches can be established. Again, a prerequisite is extensive reading in order to be familiar with the existing literature on the topic.
  • How to write a good research paper is addressed below.
  • You may ask the question: is the number of papers published important for your career? In Canada, it is generally considered that it is better to have a few papers in respected journals and conferences than to have many papers in journals and conferences of lower quality. No serious research organization simply counts the number of papers published.

How to write a good research paper

There are a number of interesting articles on the Internet about this topic. I made copies of those articles I found most interesting (among those that I found with Google).

  • Tips for writing technical papers (by Jennifer Widom) – original: Good remarks about the different parts that should be included in a paper.
  • Writing technical articles (by Henning Schulzrinne) – original: A good overview of the important points. It also contains many links to related documents (in particular, see the list at the end).
  • How to write a paper (by Mike Ashby) – original: A PowerPoint presentation on how to write a paper in several steps. It includes some nice diagrams, tables, and sketches that provide examples of what should be produced during the different steps.
  • How to write a technical paper (by Andrew A. Chien) – original: Explains how to write a technical paper in 5 steps.
  • Writing a technical paper (by Michael Ernst) – original: Some more good tips for writing technical papers.
  • … and here are some other related topics:
  • Choosing a venue: conference or journal (by Michael Ernst) – original
  • Making a technical poster (by Michael Ernst) – original
  • Reviewing a technical paper (with several links) – original

Introduction to Research in Computer Science

Prerequisite: Computer Science 40 and Computer Science 32; consent of instructor.

Defining a CS research problem, finding and reading technical papers, oral communication, technical writing, and independent learning. Course participants work in teams as they apprentice with a CS research group to propose an original research problem and write a research proposal.

National Academies Press: OpenBook

Information Technology and the Conduct of Research: The User's View (1989)

Chapter: The Use of Information Technology in Research


The Use of Information Technology in Research

In this chapter we examine the effect of information technology on the conduct of research. New technologies offer new opportunities, although pervasive use of computers in research has not come about without problems. Some of these problems are technological, some financial. Underlying many of them are complex institutional and behavioral constraints.

Nearly five decades ago, the first programmable, electronic, digital computer was switched on. That day science acquired a tool that at first simply facilitated research, then began to change the way research was done. Today these changes continue, and now amount to a revolution.

Electronic digital computers at first simply replaced earlier technologies. Researchers used computers to do arithmetic calculations previously done with paper and pencil, slide rules, abacuses, or roomfuls of people running mechanical calculators. Benefits offered by the earliest computers were more quantitative than qualitative; bigger computations could be done faster, with greater reliability, and perhaps more cheaply. But computers were large, expensive, required technically expert operators and programmers, and consequently were accessible only to a relatively small fraction of scientists and engineers.

One human generation and several computer generations later, with the advent of the integrated circuit (the semiconductor "chip"), computational speed increased by a factor of 1 trillion, computational cost decreased by a factor of 10 million, and the smallest useful calculator went from the size of a typewriter to the size of a wristwatch. At present, personal computers selling for a few thousand dollars can put significant computing power on the desk of every scientist. Meanwhile, advances in the software through which people interact with and instruct computers have made computers potentially accessible to people with no specific training in computation. More recently, computer technology has joined telecommunications technology to create a new entity, "information technology."

Information technology has done much to remove from the researcher the constraints of speed, cost, and distance. On the whole, information technology has led to improvements in research. New avenues for scientific exploration have opened. The amount of data that can be analyzed has expanded, as has the complexity of analyses. And researchers can collaborate more widely and efficiently.

Different scientific disciplines use information technology differently. Uses vary according to the phenomena the discipline studies and the rate at which the discipline obtains information. In such disciplines as high energy physics, neurobiology, chemistry, or materials science, experiments generate millions of observations per second, and these must be screened and recorded as they happen. For these disciplines, computers that can handle large amounts of information quickly are essential and have made possible research that was previously impractical. Other disciplines, such as economics, psychology, or public health, gather data on events that accumulate slowly over relatively long periods of time. These disciplines also need computers with large capacities, but do not need the capability to react in "real time." Most disciplines use information technology in ways that fall somewhere in the range between these two extremes.

HIGH ENERGY PHYSICS: SCIENCE DRIVES THE LEADING EDGE OF INFORMATION TECHNOLOGY

An example helps to illustrate the direction in which many disciplines are moving: high energy physics could not be done without information technology, and offers an extreme example of the trends for computing and communication needs in many scientific disciplines.

Most high energy physicists work on the same set of questions: what is the behavior of the most elementary particles, and what is the nature of the fundamental forces between them? Their experiments are conducted in machines called accelerators, devices that produce beams of protons, electrons, or other particles that are accelerated to high speeds and huge energies. There are two types of accelerators: those in which two beams of particles are made to collide with each other (colliders), and those in which a beam hits stationary targets. Physicists then reconstruct the collision to find new phenomena.

Remarkable results have emerged from high energy physics experiments conducted over the past two decades. For instance, a Nobel prize-winning experiment carried out at the proton-antiproton collider at the European Center for Nuclear Research (CERN) in Switzerland discovered two new particles known as the W and the Z. Their existence had been predicted by a theory claiming that the weak and electromagnetic forces, seemingly unrelated at low energy levels, were in fact manifestations of a single force, called the electroweak interaction, which would appear at sufficiently high energies. This discovery is a significant step toward the description of all known interactions (gravity, electromagnetism, and the strong nuclear and weak radioactive-decay forces) as manifestations of a single unifying force.

The process by which some tens of these new W and Z particles were isolated from millions of collision events in the CERN accelerator offers a striking illustration of the dependence of high energy physics on the most advanced aspects of information technology. Three steps are involved. First, data are acquired in real time as the experiment progresses; second, the data obtained are transformed into flight paths, from which the particles making the paths are identified; and third, the event itself is reconstructed, and those few events exhibiting the very special characteristics of the new phenomenon are identified. In each of these steps computers are vital: to trigger the identification of interesting events; to establish particle tracks from the data; and to carry out analysis and interpretation.

In the future, high energy physicists will demand more from information technology than it can now deliver. Proposed new particle accelerators, such as the Superconducting Super Collider (SSC), are expected to produce several million collisions every second, of which only one or two collisions a second can be recorded. Selecting this tiny fraction of the produced events in a manner that does not throw away other interesting data is a tremendous challenge. It is hoped that "farms" of dedicated microprocessors might be able to examine tens of thousands of collision events per second, so that sophisticated selection mechanisms can screen all collisions and select the very few that are to be recorded. The computer programs that need to be developed for these tasks are of unprecedented size and complexity, and will challenge the capabilities of both the physicists programming them and the information technology software support available to the programmers.

Even the small fraction of recorded events will result in some ten million collisions to be analyzed in a year. Processing one year's worth of saved data from the SSC would take a modern mid-sized computer 500 years; obviously, a faster processing rate is required. Although no computer currently on the market would handle this load in reasonable time, existing plans suggest that, by the time it is needed, some combination of dedicated microprocessors and large mainframe systems will be available.

High energy physicists are also highly dependent on networks. Accelerators are located in only seven main laboratories in the United States, Switzerland, West Germany, the Soviet Union, and Japan; the physicists who use them are located in many hundreds of universities and institutions scattered around the world. Almost every high energy experiment, large or small, is a result of international collaboration: for instance, one detector installed around one of the collision points of the accelerator at the Fermi National Laboratory is run by a collaboration of four foreign and thirteen U.S. institutions, involving some 200 physicists. Physicists at several institutions designed different parts of the detector; since the detector has to work as an integrated apparatus, the physicists had to coordinate their work closely. Different physicists are also interested in different aspects of the experiment, and subsequent analysis of the data depends crucially on adequate networking.

Future networking needs for high energy physics involve very high transmission speeds (as high as 10 megabits per second) between laboratories, with provision for exchange of collision event files, graphics, and video conferencing. Present long distance communication links are limited to lower transmission speeds (typically, 56 kilobits per second); each university physics group could use a 1.5 megabit per second line for its own research needs. The provision of these facilities would be of enormous benefit to university-based physicists and students who cannot travel frequently to accelerator sites.

The Panel recognizes the diversity in research methods, and differences in needs for information technology. But the needs of researchers show sufficient commonalities across research fields to make a search for common solutions worthwhile.

THE CONDUCT OF RESEARCH

The everyday work of a researcher involves such activities as writing proposals, developing theoretical models, designing experiments and collecting data, analyzing data, communicating with colleagues, studying research literature, reviewing colleagues' work, and writing articles. Information technology has had important effects on all these activities, and more change is in the offing. To illustrate these effects, we examine three particular aspects of research: data collection and analysis, communications and collaboration, and information storage and retrieval. In each area, we discuss how researchers currently use information technology and what difficulties they encounter. In a final part of this section, we discuss new technological opportunities and their implications for the conduct of research.

DATA COLLECTION AND ANALYSIS

Current Use

Collecting and analyzing data with computers are among the most widespread uses of information technology in research. Computer hardware for these purposes comes in all sizes, ranging from personal computers to microprocessors dedicated to specific instrumentational tasks, large mainframe computers serving a university campus or research facility, and supercomputers. Computer software ranges from general-purpose programs that compute numeric functions or conduct statistical analyses to specialized applications of all sorts.

The Panel has identified five trends in the use of information technology in data collection and analysis:

  • Increased use of computers for research. This trend coincides with large and continued increases in the speed and power of computers and corresponding declines in their costs.
  • Dramatic increases in the amount of information researchers can store and analyze. For example, researchers can now process and manipulate observations in a database consisting of 18 years x 3,400 individuals x 1,000 variables per individual for each year, create sets of relationships among these observations, and then subject the data to complex statistical analyses, all at a cost of less than $100. Two decades ago, that kind of analysis could not have been conducted, and a much simpler analysis would have cost at least ten times as much.
  • The creation of new families of instruments in which computer control and data processing are at the core of observation. For example, in new telescopes, image-matching programs on specialized computers align small mirrors to produce the equivalent light-gathering power of much larger telescopes with a single mirror. For instruments such as radio-telescope interferometers, the computer integrates data from instruments that are miles apart. For computer-assisted tomographic scanners, the computer integrates and converts masses of data into three-dimensional images of the body.
  • Increased communication among researchers, resulting from the proliferation of computer networks dedicated to research, from a handful in the early 1970s to over 100 nationwide at present. Different networks connect different communities. Biologists, high energy physicists, magnetic fusion physicists, and computer scientists each have their own network; oceanographers, space scientists, and meteorologists are also linked together. Networks also connect researchers with one another regionally; an example is NYSERNET, the New York State Education and Research Network. Researchers with defense agency contracts are linked with one network, as are scientists working under contract to the National Aeronautics and Space Administration (NASA). Such networks allow data collection and analysis to be done remotely, and data to be shared among colleagues.
  • Increasing availability of software "packages" for standard research activities. Robust, standardized software packages allow researchers to do statistical analyses of their data, compute complex mathematical functions, simplify mathematical expressions, maintain large databases, and design everything from circuits to factories. Many of these packages are commercial products, with high-quality documentation, service, and periodic updates. Others are freely shared software of use to a specialized community without the costs or benefits of commercial software.

One example illustrating several of the above trends is a system that geophysicists have set up to predict earthquakes more accurately. Networks of seismographs cover the western United States. One such network in northern California is called CALNET. Information from the 264 seismographs in CALNET goes to a special-purpose computer called the real-time picker. The software on the real-time picker looks at data as they come in and identifies exceptional events: patterns that indicate a coming earthquake. Then it notifies scientists of the events by telephone and sends graphics displays of locations and magnitudes, all within minutes.
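The real-time picker just described is, at heart, a streaming anomaly detector. The sketch below is a toy moving-window z-score trigger written only to illustrate that idea; it is not the CALNET software, and the window size, threshold, and synthetic data feed are invented for the example.

```python
from collections import deque
import math
import random

def streaming_picker(samples, window=200, threshold=6.0):
    """Yield (index, value) for samples that deviate sharply from the
    recent baseline: a toy stand-in for a real-time event picker."""
    recent = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in recent) / window)
            if std > 0 and abs(x - mean) / std > threshold:
                yield i, x  # exceptional event: hand off for notification
        recent.append(x)

# Synthetic feed: Gaussian background noise with one injected spike.
rng = random.Random(1)
feed = [rng.gauss(0.0, 1.0) for _ in range(1000)]
feed[700] += 25.0
print(list(streaming_picker(feed)))  # expect roughly [(700, ...)]
```

A production picker would use domain-specific waveform features rather than a plain z-score, but the structure (maintain a running baseline, flag large deviations, notify downstream) is the same.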

Difficulties Encountered

The difficulties that researchers encounter using information technology to collect and analyze data vary in importance depending on the particular discipline.

One difficulty is uneven access to computing resources. Information technology is not equally accessible to all researchers who could benefit from its use, even though broadening access is a continuing focus of institutions and funding agencies. To take an example from the field of statistics: according to a 1986 report on the Workshop on the Use of Computers in Statistical Research, sponsored by the Institute for Mathematical Statistics, "...the quality and quantity of computational resources available to researchers today varies dramatically from department to department... Perceived needs appear to vary just as dramatically.... [While] departments that already have significant computer hardware feel a strong need for operating support, ... departments that do not have their own computational resources feel an equally strong need for hardware." (Eddy, 1986, p. iii.)

Exclusion from resources happens for a variety of reasons, all reducible in the end to financial constraints. Not all academic or research institutions have links to networks; in addition, access to networks can be expensive, so not everyone who wants it can afford it. In some cases, since access to networks often mediates access to resources such as supercomputers, exclusion from networks can mean exclusion from advanced computing.

One of the most frustrating difficulties for researchers is finding the right software. Software that is commercially available is often unsuited to the specialized needs of the researcher. In those fields in which industry has an interest, however, commercial software is being developed in response to a perceived market. Software could be custom designed for the researcher, but relatively few researchers pay directly for software development, partly because research grants often cannot be used to support it. Consequently, most researchers, although they are not often skilled software creators, develop their own software with the help of graduate students. The result meets researchers' minimum needs but typically lacks documentation and is designed for one purpose only. Such software is not fully understood by any one person, making it difficult to maintain or transport to other computing environments. This means that the software often cannot be used for related projects, and the scientific community wastes time, effort, and money duplicating one another's efforts. In sections to follow we examine how this problem is being addressed by professional associations, nonprofit groups, and corporations.

Some disciplines are limited by available computer power because the computers needed are not on the market. Some contemplated calculations in theoretical physics, quantum chemistry, or molecular dynamics, for example, could use computers with much greater capacity than any even on the drawing boards. In other cases, data gathering is limited by the hardware presently available. Most commercial computers are not designed to accommodate hardware and programs that select out interesting information from observational data, and scientists who want such computers must build them.

Another difficulty researchers encounter is in transmitting data over networks at high speed. For researchers such as global geophysicists who use data collected by satellite, a large enough volume of information can be sent in a short enough time, but transmission is unreliable. Researchers often encounter delays and incur extra costs to compensate for "noise" on high-speed networks. Technological solutions such as optical fiber and error-correcting coding are currently expensive to install and implement and are often unavailable in certain geographic regions or for certain applications.

MATHEMATICS AND COMPUTATION

Computation and theory in mathematics are symbiotic processes. Machine computing power has matured to the point where mathematical problems too complicated to be understood analytically can be computed and observed. Phenomena have been observed for the first time that have initiated entirely new theoretical investigations. The theory of the chaotic behavior of dynamic systems depends fundamentally on numerical simulations; the concept of a "strange attractor" was formulated to understand the results of a series of numerical computations. Recent advances in the theory of knots have relied on algebraic computations carried out on computers. These advances can be directly applied to such important topics as understanding the folding of DNA molecules. In the field of geometry, numerical simulation has been used recently to discover new surfaces whose analytic form was too difficult to analyze directly. The simulations were understood by the use of computer graphics, and led to the explicit construction of infinite families of new examples.

The modern computer is the first laboratory instrument in the history of mathematics. Not only is it being used increasingly for research in pure mathematics, but, equally important, the prevalence of scientific computing in other fields has provided the medium for communication between the mathematician and the physical scientist. Here modern graphics plays a critical role. This interaction is particularly strong in materials science, where the behavior of liquid crystals and the shapes of complex polymers are being understood through a combination of theoretical and computational advances.

In spite of all this, mathematics has been one of the last scientific disciplines to be computerized. More than other fields, it lacks instrumentation and training. This prevents the mathematician from using modern computing hardware and techniques in attacking research problems, and at the same time isolates him/her from productive communication with scientific colleagues. Of course, mathematics is an important part of the foundation and intellectual basis of most of the methods that underlie all scientific use of computational machinery.

To use today's high-speed computing machines, new techniques have been devised. The need for new techniques is providing a serious challenge to the applied mathematician, and has placed new and difficult problems on the desk of the theorist; algorithms themselves have become an object of serious investigation. Their refinement and improvement have become at least as important to the speed and utility of high-speed computing as the improvement of hardware.

COMMUNICATION AND COLLABORATION AMONG RESEARCHERS

Current Use

Researchers cannot work without access to collaborators, to instruments, to information sources and, sometimes, to distant computers. Computers and communication networks are increasingly necessary for that access. Three technologies are concerned with communications and collaboration: word processing, electronic mail, and networks.

Word processing and electronic mail are arguably the most pervasive of all the routine uses of computers in research communication. Electronic mail (sending text from one computer user to another over the networks) is replacing written and telephone communication among many communities of scientists, and is changing the ways in which these communities are defined. Large, collaborative projects, such as oceanographic voyages, use electronic mail to organize and schedule experiments, coordinate equipment arrivals, and handle other logistical details. With the advent of electronic publishing tools that help lay out and integrate text, graphics, and pictures, mail systems that allow interchange of complex documents will become essential.

Networks range in size from small networks that connect users in a certain geographic area, to national and international networks. Scientists at different sites increasingly use networks for conversations by electronic mail and for repeated exchanges of text and data files.

The Panel has identified two major trends in the way information technology is changing collaboration and communication in scientific research:

  • Information can be shared more and more quickly. For example, one of the first actions of the federal government after the discovery of the new high-temperature superconductors was to fund, through the Department of Energy's Ames Laboratory, the creation of a superconductivity information exchange. The laboratory publishes a biweekly newsletter on advances in high-temperature superconductivity research, available in both paper and electronic forms; the electronic version is sent out to some 250 researchers.
  • Researchers are making new collaborative arrangements. The technology of networks provides increased convenience and faster turnaround times, often with several completed message exchanges in one day. For shorter messages, special software allows real-time exchanges.

IF KITCHEN APPLIANCES WERE LIKE SOFTWARE

If kitchen appliances were like programs, they would all look alike sitting on the counter. They would all be gray, featureless boxes, into which one places the food to be processed. The door to the box, like the box itself, is completely opaque. On the outside of each box is a general description of what the box does. For instance, one box might say: "Makes anything a meal"; another: "Cooks perfectly every time"; another: "Never more than 100 calories a serving." You can never be exactly sure what happens to food when it is placed in these boxes. They don't work with the door open, and the 200-page user's manual doesn't give any details.

Working in a kitchen would be a matter of becoming familiar with the idiosyncrasies of a small number of these boxes and then trying to get done what you really want done using them. For instance, if you want a fried-egg sandwich, you might try the "Makes anything a meal" box, since a sandwich is a sort of meal. But because you know from past experience that this box leaves everything coated with grease, you use the "Never more than 100 calories" box to postprocess the output. And so on. The result is never what you really want, but it is all you can do. You aren't allowed to look inside the boxes to help you do what you really want to do. Each box is sealed in epoxy. No one can break the seal. If the box seems not to be working right, there is nothing you can do.

Even calling the manufacturer is no help, because the box is not under warranty to be fit for any particular purpose. The manufacturers do have help lines, but not for help with broken boxes; rather, to help you figure out how to use functioning boxes. But don't try to ask how your box works. The help-line people don't know, or if they do, they won't tell you. Several times a year you get a letter from the manufacturer telling you to ship them your old box and they will send you a new one. If you do so, you find yourself with a shinier box, which does whatever it did before a little faster, or perhaps it does a little more, but since you were never sure what it did before, you cannot be sure it's better now.

SOURCE: Mark Weiser, 1987. "Source Code," IEEE Computer, 20: 66-73.

DOCUMENT PROCESSING

An area of significant change is document processing. This began in the 1960s with a few simple programs that would format typed text. In the context of UNIX in the 1970s, these ideas led to a new generation of document processing programs and languages, such as SCRIBE and the UNIX-based tools troff, eqn, tbl, and pic. The quintessence of these ideas are Knuth's TeX and METAFONT systems, which have begun to revolutionize the world's printing industry. In workstations, these ideas have produced WYSIWYG ("what you see is what you get") systems that display formatted text exactly as it will appear in print. International standards organizations are considering languages for describing documents, and some software manufacturers are constructing systems, such as the POSTSCRIPT protocols, embodying these ideas. The NSF-sponsored EXPRES project, at the University of Michigan and Carnegie Mellon University, illustrates a serious effort to develop a standard method of exchanging full scientific documents by network. Low-cost laser printers now make advanced document preparation and printing facilities available to many people with workstations and personal computers. It is now possible for everyone to submit high-quality, camera-ready copy directly to publishers, thus speeding the publication of new results; however, it is no longer true that a well-formatted document can be trusted to have undergone a careful review and editing before being printed.

SOURCE: Peter J. Denning, 1987. Position Paper: Information Technology in Computing.

As Lederberg noted a decade ago (Lederberg, 1978), digital communication allows scientists to define collegial relationships along the lines of specialized interests rather than spatial location. This is immensely beneficial to science as a whole, but causes some consternation among administrators who find researchers more loyal to disciplines than to institutions.

Technologies in the process of development show the networks' remarkable potential. Multimedia mail allows researchers to send a combination of still images, video, sound, and text. Teleconferencing provides simultaneous electronic links among several groups. Electronic chalkboards allow researchers to draw on their chalkboard and have the drawing appear on their computer and on the computers of collaborators across the country. Directory services, or "nameservers," supply directories of the names and network addresses of users, processes, and resources on a given network or on a series of connected networks. Program distribution services include the supply of mathematical software to subscribers. A spectacular new technology is represented in the Metal Oxide Semiconductor Implementation System (MOSIS), a service that contracts for the manufacture of very large-scale integrated (VLSI) chips from circuit diagrams pictured on a subscriber's screen. Fabrication time is often less than 30 days. In one notable example, the researchers designing a radiotelescope in Australia designed custom chips for controlling the telescope. MOSIS returned the chips in a matter of days; the normal manufacturing process would have taken months and would have delayed the development of the instrument considerably.

NEW FORMS OF COLLABORATION THROUGH THE NETWORKS

The development of COMMON LISP (a programming language) would most probably not have been possible without the electronic message system provided by ARPANET, the Department of Defense's Advanced Research Projects Agency network. Design decisions were made on several hundred distinct points, for the most part by consensus, and by simple majority vote when necessary. Except for two one-day face-to-face meetings, all of the language design and discussion was done through the ARPANET message system, which permitted effortless dissemination of messages to dozens of people, and several interchanges per day. The message system also provided automatic archiving of the entire discussion, which has proved invaluable in preparation of this reference manual. Over the course of thirty months, approximately 3000 messages were sent (an average of three per day), ranging in length from one line to twenty pages... It would have been substantially more difficult to have conducted this discussion by any other means, and would have required much more time.

SOURCE: Guy Steele, 1984. COMMON LISP: The Language. Bedford, MA: Digital Press, pp. xi-xii. Reprinted with permission. Copyright Digital Press/Digital Equipment Corporation.

To share complex information (such as satellite images) over the networks, researchers will need to be able to send entire pictures in a few seconds. One technique that is likely to receive more attention in the future is data compression, which removes redundant information and converts data and images to more compact forms that require less time to transmit.

Among the most important of potential applications of information technology is the emergence of a truly national research network, that is, a set of connections, or gateways, between networks to which every researcher has access. The National Science Foundation has announced its intention to serve as a lead agency in the development of such a network, beginning with a backbone, called NSFNET, that links the NSF-supported supercomputing centers, and widening to include other existing networks. Widespread access to networks will also offer much more than just communications links. They can become what the network serving the molecular biology community aims to be: a full-fledged information system.

Difficulties Encountered

The principal difficulty with communicating across research communities via electronic mail and file transfer technologies is incompatibility. The networks were formed independently, evolved over many years, and are now numerous. Consequently, networks use different protocols, that is, different conventions for packaging data or text for transmission, for locating an appropriate route from sender to receiver over the physical network, and for signaling the start and stop of a message. For example, a physicist on the High Energy Physics network (HEPNET) trying to send data to a physicist on one of the regional networks would first have to ask "What network are you on?"; "How do I address you?"; and "What form do you want the information in?" In the gateway between two networks, the protocols of the first network must be removed from the message and the protocols for the second added. Under heavy traffic loads, the gateways can become bottlenecks. As a result, navigating from one network to a researcher on another is time-consuming, tiresome, and often unreliable; navigating over two networks to a researcher on a third is prohibitively complex.

Text can frequently be moved from one word processing system to another only with significant loss of formatting information, including the control of spacing, underlining, margins, or indentations. Graphics can only rarely be included with text. Such issues of compatibility may delay the expansion of electronic publishing as well as electronic proposal submission and review, the goals of the National Science Foundation's EXPRES project. The issues are summarized succinctly by Denning: "Most word processors are inadequate for scientific needs: they cannot handle graphs, illustrations, mathematics and layout, and myriad file formats make exchange extremely difficult. With so many experts and so much competition in the market, it is hard to win agreement on standards. There is virtually no electronic support for the remainder of the process of scientific publication: submission, review, publication, and distribution. These issues can be expected to be resolved over the next few years, as document interchange formats are adopted by standards organizations and incorporated into software revisions and equipment upgrades. However, the transition process will not be painless" (Denning, 1987, pp. 26-27).
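The data compression mentioned at the start of this section exploits redundancy in the data. A minimal sketch in Python of one elementary technique, run-length encoding, which collapses a run of identical values into a (value, count) pair; real image-compression schemes are far more sophisticated, but the principle is the same:

    # Run-length encoding: an elementary compression technique that
    # replaces each run of identical values with a (value, count) pair.
    def rle_encode(data):
        out = []
        for value in data:
            if out and out[-1][0] == value:
                out[-1][1] += 1
            else:
                out.append([value, 1])
        return out

    def rle_decode(pairs):
        return [value for value, count in pairs for _ in range(count)]

    row = [0, 0, 0, 0, 7, 7, 9, 0, 0, 0]      # e.g., one row of image pixels
    packed = rle_encode(row)                  # [[0, 4], [7, 2], [9, 1], [0, 3]]
    assert rle_decode(packed) == row          # lossless round trip

For the large, mostly uniform regions typical of satellite imagery, even so simple a scheme can shrink a picture considerably before transmission.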

In addition, some networks limit use under certain circumstances; for instance, one network bars communication among researchers at industrial laboratories. The fear is that corporations would use a research network for commercial profit or even for sales or marketing. The Panel believes such fear is misplaced and that networks should be open for all research communication. (Boxes on pages 22-27 examine network use alternatives.)

On the whole, the management of the networks is anarchic. Networks operate not as though they were a service vital to the health of the nation's research community but as small fiefdoms, each with strong disciplinary direction, with little incentive to collaborate. The National Science Foundation has taken an early leadership role, with such initiatives as NSFNET, which addresses many of the current networking problems, and the EXPRES project, which establishes standards for the electronic exchange of complex documents. Such efforts to provide integration and leadership are vital to increased research productivity.

FROM A NETWORK TO AN INFORMATION RESOURCE PROTOTYPE: BIONET

BIONET is a nonprofit resource for molecular biology computing that provides access to software, recent versions of databases relevant to molecular biology, and electronic communications facilities. Work is in progress to expand BIONET as a logical network reaching molecular biologists throughout the research community worldwide. Many existing physical networks are in use by molecular biologists, and it is BIONET's aim to utilize them all. BIONET is working on plans to provide molecular biologists with access to one or more supercomputers or parallel processing resources. Special programs will be developed to provide molecular biologists with an easy interface to submit supercomputer jobs.

Especially active are the METHODS-AND-REAGENTS bulletin board (for requesting information on lab protocols and/or experimental reagents) and the RESEARCH-NEWS bulletin board, which has become a forum for posting interesting scientific developments and also a place where scientists can introduce their labs and research interests to the rest of the electronic community. Bulletin boards have been instituted for the GenBank and EMBL nucleic acid sequence databases. Copies of messages on these bulletin boards are forwarded to the database staff members for their attention. These bulletin boards serve as a medium for discussing issues relating to the databases and as a place where users of the databases can obtain assistance. Along these same lines BIONET has developed the GENPUB program that facilitates submission of sequence data and author-entered annotations in computer-readable form directly to GenBank and EMBL via the electronic mail network.

The journals CELL and CABIOS have established accounts on BIONET, and the Journal of Biological Chemistry and several others

will also soon be on board. Several journals have indicated an interest in publishing research abstracts on BIONET in advance of hardcopy articles.

Annotated examples of program usage have been included in the HELP ME system. The examples, formatted to be suitable for printing out as a manual, cover the major uses of the BIONET software for data entry, gel management, sequence, structure and restriction site analysis, cloning simulations, database searches, and sequence similarities and alignments. A manual of standard molecular biology lab protocols has also been added to HELP ME for users to reference.

One of BIONET's major goals is to serve as a focus for the development and sharing of new software tools. Towards achieving this goal, BIONET has made available to the community a wide variety of important computer programs donated by a number of software developers. A collaborative effort has occurred between the BIONET staff and the software authors to expand the usefulness of important software by making it compatible with a number of hardware and user community constraints.

BIONET provides an increasing number of databases online: lists of restriction enzymes; a bank of common cloning vector restriction maps and complete vector sequences; a database of regular expressions derived from published consensus sequences; and the searchable full text of a recent revision of "Genetic Variations of Drosophila melanogaster" by Dan L. Lindsley and E.H. Grell (the Drosophila "Red Books"). Some of these can be used as input to search programs. BIONET invites curators of genetic and physical genome maps to use this resource for the collection, maintenance, and distribution of their databases.

SOURCE: Roode et al., 1988. "New Developments at BIONET," Nucleic Acids Research, 16(5):1857-1859.

INFORMATION STORAGE AND RETRIEVAL

Current Uses

How information is stored determines how accessible it is. Scientific texts are generally stored in print (in the jargon, in hard copy) and are accessible through the indices and catalogs of a library. Some texts, along with programs and data, however, are stored electronically on disks or magnetic tapes to be run in computers, and are generally more easily accessible. In addition, collections of data, known as databases, are sometimes stored in a central location. In general, electronic storage of information holds enormous advantages: information can be stored economically, found quickly without going to another location, and moved easily.

One kind of database holds factual scientific data. The Chemical Abstracts Service, for example, has a library of the molecular structures of all chemical substances reported in the literature since 1961. GenBank is a library of known genetic sequences. Both the National Aeronautics and Space Administration and the National Oceanic and Atmospheric Administration have thousands of tapes holding data on space and the earth and atmosphere.
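A sequence library such as GenBank is valuable precisely because it can be searched mechanically. A toy sketch in Python of the simplest kind of query, finding every stored sequence that contains a given subsequence; the records here are invented for illustration, not actual GenBank entries:

    # Toy sequence library: find every record containing a query
    # subsequence. The sequence data are invented for illustration.
    library = {
        "seq-001": "ATGGCCATTGTAATGGGCCGC",
        "seq-002": "GATTACAGATTACA",
        "seq-003": "ATGAAGGGCCGCTTT",
    }

    def find_matches(query):
        """Return (record id, offset) for each occurrence of query."""
        hits = []
        for name, seq in library.items():
            start = seq.find(query)
            while start != -1:
                hits.append((name, start))
                start = seq.find(query, start + 1)
        return hits

    print(find_matches("GGGCCGC"))   # [('seq-001', 14), ('seq-003', 5)]

Production systems add indexing and approximate matching, but the essential gain over print is the same: a question that would take weeks in a paper archive is answered in seconds.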

A second kind of database, a reference database, stores information on the literature of the sciences. For example, Chemical Abstracts Service has abstracted all articles published in journals of chemistry since 1970 and makes the abstracts available electronically. The National Library of Medicine operates services that index, abstract, and search the literature database (known as MEDLARS). In addition, it distributes copies of the database for use on local computers and has developed a communications package, called GRATEFUL MED, that simplifies searching the major MEDLARS files (over six million records through 1987). In addition to biomedicine and clinical medicine, the National Library of Medicine partially covers the literature of the disciplines of population control, bioethics, nursing, health administration, and chemistry. One of its most important databases, for instance, is TOXLINE, which references the chemical analysis of toxins. Information search services have grown up around these and other databases, including a number of commercial ones, and now constitute a substantial industry.

A database, taken together with the procedures for indexing, cataloging, and searching it, makes up an information management system. Some potentials of information management systems have been predicted for years, beginning with Vannevar Bush's MEMEX (Bush, 1945). The box on pages 28-29 illustrates a current working information management system that links texts and databases in genetics and medicine.

BIRTH OF A NETWORK: A HISTORY OF BITNET (EXCERPTED)

BITNET (Because It's Time NETwork) began as a single leased telephone line between the computer centers of The City University of New York (CUNY) and Yale University. It has developed into an international network of computer systems at over 800 institutions worldwide. Because membership is not restricted by disciplinary specialty or funding ability, BITNET plays a unique role in fostering the use of computer networking for scholarly and administrative communication both nationally and internationally.

In 1981, CUNY and Yale had been using internal telecommunications networks to link computers of their own. The New York/New Haven link allowed the same exchanges to take place between two universities. The founders of BITNET, Ira Fuchs, then a CUNY vice chancellor, and Greydon Freeman, the director of the Yale Computing Center, realized that the fledgling network could be used to share a wide range of data. Furthermore, the ease and power of electronic mail showed new potential for cooperative work among scholars; collective projects could now be undertaken that would have been difficult or impossible if conducted by postal mail or by phone.

Fuchs and Freeman approached the directors of other academic computer centers with major IBM installations to invite them to become members of the new network. The plan of shared resources that BITNET offered included two proposals: a) that each institution pay for its own communications link to the network; and b) that each provide facilities for at least one new member to connect. Software was used to create a store-and-forward chain of computers in which files, messages, and commands are passed on without charge from site to site to their final destination. BITNET became a transcontinental network in 1982 when the University of California at Berkeley leased its own line to CUNY. Berkeley agreed to allow other

California institutions to link to the network through its line, in return for some expense sharing.

In 1984, IBM agreed to support CUNY and EDUCOM (a nonprofit consortium of colleges, universities, and other institutions founded in 1964 to facilitate the use and management of information technology) in organizing a centralized source of information and services to accommodate the growing number of BITNET users. EDUCOM set up a Network Information Center (BITNIC), whose ongoing functions include the handling of registration of new members; at the same time, CUNY established a Development and Operations Center (BITDOC), which develops tools for the network.

BITNET's success (it is now in all fifty states) led to the formation of a worldwide network of computers using the same networking software: in Europe and the Middle East (EARN, the European Academic Research Network), Canada (NetNorth), Japan, Mexico, Chile, and Singapore (all of which are members of BITNET). There is also active interest from other countries in the Far East, Australia and New Zealand, and South America. Although political and funding considerations have forced their administrative segregation, BITNET, EARN, and NetNorth form one topologically interconnected network.

Success has also meant some further structuring of what had once been essentially a buddy system. BITNET is now governed by a board of trustees elected by and from its membership. The members of the board each participate in various policy-making committees focusing on network usage, finance and administration, BITNIC services and activities, and technical issues. What began as a simple device for intercampus sharing is simple no longer.

SOURCE: Holland Cotter, 1988. Birth of a network: A history of BITNET. CUNY/University Computer Center Communications, 14:1-10.

Difficulties Encountered

For all disciplines, both factual and reference databases promise to be significant sources of knowledge for basic research. But to keep this promise, a Pandora's box of problems will have to be solved. Difficulties encountered with factual databases, stated succinctly, are: the researcher cannot get access to data; if he can, he cannot read them; if he can read them, he does not know how good they are; and if he finds them good, he cannot merge them with other data.

Researchers have difficulty getting access to data stored by other researchers. Such access permits reanalysis and replication, both essential elements of the scientific process. At present, with a few exceptions, data storage is largely an individual researcher's concern, in line with the tradition that researchers have first rights to their data. The result has been a proliferation of idiosyncratic methods for storing, organizing, and indexing data, with one researcher's data essentially inaccessible to all other researchers.

Even if a researcher gets access to a colleague's data, he may not be able to read them. The formats with which data are written on magnetic tape, like the formats used in word processing systems, vary from researcher to researcher, even within disciplines. The same formatting problems prohibit the researcher from merging someone else's data into his own database. In order either to read or to merge another's data, considerable effort must be dedicated to converting tape formats. Finally, when a researcher gets access to and reads another's database, he often has no notion of the quality of the data it contains. A number of proposals (see Branscomb, 1983; National Research Council, 1978) have been made for the creation of what are called evaluated databases, in which data have been verified by independent assessment.

In fields such as organizational science or public health, the costs of collecting and storing data are so large that researchers often have to depend on case studies of organizations or communities to test hypotheses. Researchers in these fields have proposed combining data from many surveys into databases of national scope. If differences in research protocols and database formats can be resolved, such national databases can increase the quality and effectiveness of research.

THE STUDY PANEL'S EXPERIENCE WITH ITS OWN ELECTRONIC MAIL IS INSTRUCTIVE

Most of the members of the Panel use electronic mail in their professional work; some use it extensively, exchanging as many as seventy messages in one day. At their first meeting, Panel members and staff decided it would be useful to establish electronic communication links for the Panel. Using a network to which he had access, one of the Panel members devised a distribution-list scheme for the Panel. He designed a system that would allow Panel members to exchange messages or documents easily by naming a common group "address." This group address would connect everyone by name from their own network. Panel members would not have to remember special codes or routes to other networks, but could use their own familiar network. Also, messages could be sent to one, several, or all of the Panel members at once.

Between December 1986 and March 1988, nearly 2,000 messages went out using the Panel's special electronic group address. In line with what has been found in systematic research on electronic mail by ad hoc task groups (Finholt, Sproull and Kiesler, 1987), most of the messages went from study staff managing the project to Panel members. Typically, staff used electronic mail to perform coordinating and attentional functions, e.g., to structure meetings, to ask Panel members for information or to perform writing tasks, and to provide members with progress reports. In addition, some Panel members sent mail through other network channels to each other; for instance, two Panel members exchanged electronic mail about computers in the oceanographic community through BITNET, ARPANET, and OMNET. Although previous research and our own

informal observations agree in suggesting that the electronic group mail scheme helped the Panel to work more efficiently, the system was used much less extensively than had been originally envisioned. For example, when delivery of report drafts was crucial, the staff relied on overnight postal mail. Network service inadequacies and technical problems are partly to blame; for example, it took months before messages could be sent predictably and reliably to every Panel member. Because the networks do not facilitate access to service support (comparable to telephone system operators, for example), Panel members had to rely on their own resources to remedy any system inefficiencies. For example, changes to electronic mail addresses in the system could not be made after a few months, so that new addresses had to be added to individual messages.

Such technical problems, though by no means insurmountable, were annoying. Analysis of a sample of messages received by Panel staff indicates that approximately 10 percent contained some complaint about delays, losses of material in transmission, or unavailability of the group mail system. Often, documents were difficult to read because document formatting codes embedded in the document files were removed prior to transmission. A message legible on one system might be filled with unintelligible characters when received on another. With considerable difficulty, some Panel members converted messages received electronically to formats they could read using their text editors. Then they would type in their own revisions, which once again would have to be converted to plain formats to be sent back through the networks. This experience suggests that much needs to be done to make internetwork communication by groups more efficient and easier to use.

The primary difficulty encountered with reference databases is in conducting searches. Most information searches at present are incomplete, cumbersome, inefficient, expensive, and executable only by specialists. Searches are incomplete because databases themselves are incomplete (updating a database is difficult and expensive) and because information is stored in more than one database. Searches are cumbersome and inefficient because different databases are organized according to different principles and cannot readily be searched except by commands specific to each database. Searches are expensive because access is expensive (as much as $300 per hour), because network linkages to the databases impose substantial surcharges, and because the inefficiency of the systems means that searches may have to be repeated.

A difficulty common to both scientific and reference databases is a pressing need for new and more compact forms of data storage. Disciplines such as oceanography, meteorology, space sciences, and high energy physics have already gathered so much data that more efficient means of storage are essential; and others are following close behind. One solution seems to lie in optical disk storage, for which various alternative technologies are under development. Currently, these new techniques lack commonly accepted standards.
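The command incompatibility described above is at bottom a translation problem: the same logical query must be rendered in each database's private syntax. A skeletal Python sketch of a front end that performs this translation; the two command syntaxes shown are invented stand-ins, not the actual MEDLARS or Chemical Abstracts query languages:

    # Skeletal search front end that renders one logical query in each
    # database's (invented) native command syntax, illustrating why a
    # common standard would simplify searching.
    def to_medline_style(terms):
        return "SS " + " AND ".join(f"{t} (TW)" for t in terms)

    def to_chem_style(terms):
        return "FIND " + ", ".join(terms) + " /ABSTRACT"

    TRANSLATORS = {"medline": to_medline_style, "chem": to_chem_style}

    def search_all(terms):
        for name, translate in TRANSLATORS.items():
            command = translate(terms)
            print(f"{name:>8}: {command}")   # a real client would transmit this

    search_all(["superconductivity", "thin films"])
    #  medline: SS superconductivity (TW) AND thin films (TW)
    #     chem: FIND superconductivity, thin films /ABSTRACT

Every new database multiplies the translators such a front end must carry, which is exactly the burden that a common query standard would remove.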

Another difficulty is that stored data gradually become useless, either because the storage media decay or the storage technology itself becomes obsolete. Data stored on variant forms of punched cards, on paper tape, or on certain magnetic tape formats may be lost due to the lack of reading devices for such media. Even if the devices still exist, some data stored on magnetic tapes will be lost as the tapes age, unless tapes are copied periodically. Needless to say, such preservation activities often receive low priority. (See box on satellite-derived data, page 30.) An important archival activity that also receives a low priority is the conversion of primary and reference data from pre-computer days into machine-readable form. In this regard, the efforts of the Chemical Abstracts Service to extend their chemical substance and reference databases are praiseworthy.

Another difficulty in storing information is private ownership. By tradition, researchers hold their data privately. In general, they neither submit their data to central archives nor make their data available via computer. Increasingly, however, in disciplines like meteorology and the biomedical sciences, submission of primary data to data banks has become accepted as a duty. In the field of economics, the National Science Foundation now requires that data collected with the support of the Economics Program be archived in machine-readable form, and that any professional article citing program support be accompanied by a fully documented disk describing the underlying data.

HOW A LIBRARY USES COMPUTERS TO ADVANCE PRODUCTIVITY IN SCIENCE

In 1985 the William H. Welch Medical Library of the Johns Hopkins University began a unique collaboration with Dr. Victor A. McKusick, the Johns Hopkins University Press, and the National Library of Medicine to develop and maintain an online version of McKusick's book Mendelian Inheritance in Man (known as OMIM, for Online Mendelian Inheritance in Man). While the book contains 3,900 phenotypes (a specific disorder or substance linked to a genetic disease) and updates are issued approximately every five years, OMIM currently describes more than 4,300 phenotypes and is updated every week. A gene map is available, keyed to the phenotype descriptions.

Any registered user worldwide can dial up OMIM and search its contents through a simple three-step process: 1) state the search in simple English (e.g., relationship between Duchenne muscular dystrophy and growth hormone deficiency); 2) examine the list of documents, which are presented in ranked order of relevance; and 3) select one or more documents to read in detail. Having selected a document, the searcher can determine through a single keystroke whether the phenotype has been mapped to a specific chromosome. OMIM entries are also searchable in a related file, the Human Gene Mapping Library (HGML) at Yale University. By mid-1988, researchers will be able to use the same access code to enter and search three related databases: HGML in New Haven, the Jackson Laboratory Mouse Map in Bar Harbor, and OMIM in Baltimore.

OMIM is more than an electronic text. It is a dynamic database with many applications. Searching the knowledge base is only one of its uses. It can be used as a working tool. For example, at the last biennial international Human Gene Mapping conference in Paris (September 1987) the results of the committees' deliberations were used to update and regenerate the database each evening. Every

morning, the conferees had fresh files to consult. This information was available worldwide at the same time. In the future, these conferences can take place electronically as frequently as desired by the scientific community.

OMIM is a node in an emerging network of biotechnology databases, data banks, tissue repositories, and electronic journals. In a few years, it may be possible to enter any of these files from any one of the related files. Through this kind of linkage, OMIM may serve as a bridge between the molecular geneticists and the clinical geneticists. Currently, these databases are primarily text or numerical files. As technology improves and becomes ubiquitous, and as network bandwidth expands, databases will routinely include visual images and complex graphics. It may also be possible to jump from one point within a file to relevant and related points deep within other files.

OMIM and its future manifestations result from collaborative efforts and support from diverse groups. Dr. Victor A. McKusick is the scientific expert responsible for the knowledge base; his editorial staff adds new material and updates the database. The National Library of Medicine developed OMIM as part of its Online Reference Works program. The Welch Medical Library provides the computers, network gateways, database maintenance and management, and user support. Finally, the Howard Hughes Medical Institute provides partial support for access, maintenance, and future development of the system.

The Welch Library must work closely with both the author and the users to represent research knowledge in ways that best suit the users' purposes. It must be able to respond quickly to the changing needs of the author and the users. It is in a unique position to study and engineer a new kind of knowledge utility. The OMIM effort is part of a project to develop a range of online texts and databases in genetics and internal medicine, carried out in the Library's Laboratory for Applied Research in Academic Information.

In the social sciences, a 1985 report of the National Research Council's Committee on National Statistics recommended both that "sharing data should be a regular practice" and that a "comprehensive reference service for computer-readable social science data should be developed" (Fienberg, Martin, and Straf, 1985).

In addition, peer review of articles and proposals has been constrained by the difficulty of gaining access to the data used for analysis. If writers were required to make their primary data available, reviewers could repeat at least part of the analyses reported. Such review would be more stringent, would demand more effort from reviewers, and raises a number of operational questions that need careful consideration; but it would arguably lead to more careful checking of published results.

Underlying the difficulties in information storage and retrieval are problems in the institutional management of resources. Who is to manage, maintain, and update information services? Who is to create and enforce standards? At present the research community has three alternative answers: the federal government, which manages such resources as MEDLINE and GenBank; professional societies, such as the American Chemical Society, which manages the Chemical Abstracts Service, and the American Psychological Association, which manages Psychological Abstracts; and private for-profit enterprises such as the Institute for Scientific Information.
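The three-step OMIM search described in the box (plain-English query, ranked document list, selection) can be approximated with very little machinery. A toy sketch in Python that ranks records by how many query words they contain; the phenotype entries are invented placeholders, and real systems weight terms far more carefully:

    # Toy ranked retrieval in the spirit of OMIM's search: score each
    # record by the number of query words it contains, then present a
    # ranked list. The entries below are invented placeholders.
    records = {
        "entry-0001": "duchenne muscular dystrophy x-linked progressive weakness",
        "entry-0002": "growth hormone deficiency short stature pituitary",
        "entry-0003": "muscular dystrophy limb-girdle autosomal recessive",
    }

    def rank(query):
        words = set(query.lower().split())
        scored = [(sum(w in text.split() for w in words), name)
                  for name, text in records.items()]
        return sorted([(s, n) for s, n in scored if s > 0], reverse=True)

    for score, name in rank("duchenne muscular dystrophy"):
        print(score, name)
    # 3 entry-0001
    # 2 entry-0003

The researcher then inspects the top-ranked entries in detail, which is the third step of the OMIM procedure.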

NEW OPPORTUNITIES: APPROACHING THE REVOLUTION ASYMPTOTICALLY

The information technologies and institutions of the past that revolutionized scholarly communication (writing, the mails, the library, the printed book, the encyclopedia, the scientific societies, the telephone) made information more accessible, durable, or portable. The advent of digital information technology and management continues the revolution, suggesting a vision, still somewhat incoherent, of new ways of finding, understanding, storing, and communicating information.

HANDLING SATELLITE-DERIVED OBSERVATIONAL DATA

At present both the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA) operate earth-orbiting satellites and collect data from them. Both NOAA and NASA store large volumes of primary data from the satellites on digital tape. Both have faced problems, although each organization's problems are different. NOAA, until 1985, had a system that, for purposes of satellite operations, stored environmental satellite data on a Terabit Memory System (TBM). The TBM technology was used from 1978 to 1985, at which time it became obsolete; the more than 1,000 tapes of data collected have been reduced by about 40 percent in transforming most of the useful materials to standard digital tape for storage. NASA has used standard digital tape and disk storage technologies and, since ceding the LANDSAT satellites, has recorded and saved data from its research earth-observing satellites as needed.

Both NASA and NOAA face real problems in making data accessible for scientific analysis. NASA has expended time, effort, and money building a number of satellite data distribution systems that provide digital data archives and a catalog of satellite data holdings, as well as images and graphical analyses produced from satellite data. For example, NASA's National Space Science Data Center received and filled some 2,500 requests for tapes, films, and prints in the first half of fiscal 1988, and also provided network access to specific databases. NOAA has been largely unable to get financial support for its proposed satellite data management systems. Selection of needed information from among the data available remains a problem. Some pilot systems under development at both agencies succeed in leading the user through a catalog, but fail to contain much valuable new information and data. Both agencies continue to hold great amounts of environmental satellite data in their permanent archives that are difficult to access, expensive to acquire, and as a result are ignored by many researchers who could benefit from their use. Much remains to be done to improve access to important satellite-derived data.

Some technologies involved in the revolution are

· Simulations of natural (or hypothesized) phenomena;
· Visualization of phenomena through graphical displays of data; and
· Emerging use of knowledge-based systems as "intelligent assistants" in managing and interpreting data.

Simulations allow examination of hypotheses that may be untestable under normal conditions. Plasma physicists simulate ways of holding and heating a hot, turbulent plasma until it reaches the temperatures necessary for fusion. Cosmologists simulate the growth of galaxies and clusters of galaxies in an infant universe. Engineers simulate the growth of fractures in a metal airplane wing or nuclear reactor. Chemists' simulations may someday be sophisticated enough to screen out unproductive experiments in advance. Drug companies are considering the use of simulations to design drugs for a particular function, for example, a non-addictive drug that also kills pain. In general, simulations extend researchers' ability to model a system and test the model developed. (See box on simulation, below.)

Visualization techniques turn the results of numerical computations into images. The remarkable ability of the human brain to recognize patterns in pictures allows faster understanding of results in solutions to complex problems, as well as faster ways of interacting with computer systems and models. (See box on visualization, pages 32-33.)

USES OF SIMULATION IN ECONOMETRICS

Simulation techniques take estimated relationships or numerical models that appear to be consistent with observations of actual behavior and apply them to problems of predicting the changes induced by time, or of measuring the relationships among sets of economic variables. For example, simulation models have been utilized to study the effects of oil price changes on the rate of inflation, proposed policies regarding labor law, and future interest rates. In addition, exchanges among groups of agents in an economy have been used in dynamic input-output analysis to make inferences about the feasible or likely future course of economic growth in the entire economy or within specific industries or regions.

There is a growing interest in investigating the properties of models that represent the workings of firms, markets, and whole economies as nonlinear adaptive systems. Recently this has begun to expand the reliance placed by essentially theoretical researchers upon extensive applications of numerical simulation methods. Finally, in both extensions of the line of inquiry just noted and in other contexts, direct simulation of stochastic processes via Monte Carlo techniques can be used by economists to gain insights into the properties of stochastic systems that resist deductive techniques due to their (current) analytic intractability.

SOURCE: Paul A. David and W. Edward Steinmueller, 1987. Position paper: "The Impact of Information Technology Upon Economic Science," p. 21.
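Monte Carlo simulation of the kind the box mentions substitutes repeated random sampling for intractable analysis. A minimal Python sketch; the "model" here (the chance that twelve independent standard-normal shocks sum past a threshold) is an invented stand-in, not any particular econometric study:

    # Monte Carlo sketch: estimate by repeated random sampling a
    # quantity that resists closed-form analysis.
    import random

    def trial():
        # One simulated outcome: do twelve independent shocks sum past 4?
        return sum(random.gauss(0.0, 1.0) for _ in range(12)) > 4.0

    def estimate(n_trials=100_000):
        hits = sum(trial() for _ in range(n_trials))
        return hits / n_trials

    print(f"P(sum > 4) is roughly {estimate():.3f}")   # about 0.12 in theory

In this simple case the answer is checkable analytically (the sum is normal with variance 12), which is exactly how such simulation code is validated before it is turned loose on models that have no closed form.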

For instance, while small molecules have a few dozen atoms and are easy to visualize, large molecules, like proteins, have tens of thousands of atoms. A useful physical model of the structure of a protein might stand six feet high and cost several thousand dollars. Moreover, a researcher could not slice a physical model to see how it looks inside; with visualization techniques, he could. Visualization is the single advanced technology most widely mentioned by Panel members and position paper writers. (For a critical analysis of opportunities in visual imaging, see McCormick, DeFanti, and Brown, 1987.)

Intelligent assistants can serve as interfaces between the researcher and the computer. Just as computers increase our power to collect, store, filter, and retrieve data, they can also help us reason about the data. Over the last three decades, computer scientists have been developing methods for symbolic information processing, or artificial intelligence. While these programs are not fully intelligent in the sense that humans are, they allow computers to solve problems that are not reducible to equations. Artificial intelligence programs have been written for many scientific tasks. These tasks are not expressible in terms of numerical operations alone, and, thus, require symbolic computation. The programs fall into a general class, called expert systems, because they are programmed to reach decisions in much the same way as experts do. Expert systems have been successfully applied to industrial areas such as manufacturing and banking. To date, only a few prototype systems have been written for scientific research.

VISUALIZATION IN SCIENTIFIC COMPUTING

Scientists need an alternative to numbers. The use of images is a technical reality today and a cognitive imperative tomorrow. The ability of scientists to visualize complex computations and simulations is absolutely essential to ensure the integrity of analyses, to provoke insights, and to communicate those insights with others.

Several visually oriented computer-based technologies already exist today. Some have been exploited by the private sector, and off-the-shelf hardware and software can be purchased; others require new developments; and still others open up new research areas. Visualization technology, well integrated into today's workstation, has found practical application in such areas as product design, electronic publishing, media production, and manufacturing automation. Management has found that visualization tools make their companies more productive, more competitive, and more professional.

So far, however, scientists and academics have been largely untouched by this revolution in computing. Secretaries who prepare manuscripts for scientists have better interactive control and visual feedback with their word processors than scientists have over large computing resources that cost several thousand times as much.

Traditionally, scientific problems that required large-scale computing resources needed all the available computational power

to perform the analyses or simulations. The ability to visualize results or guide the calculations themselves requires substantially more computing power. Electronic media, such as videotapes, laser disks, optical disks, and floppy disks, are now necessary for the publication and dissemination of mathematical models, processing algorithms, computer programs, experimental data, and scientific simulations. The reviewer and the reader will need to test models, evaluate algorithms, and execute programs themselves, interactively, without an author's assistance. Scientific publication needs to be extended to make use of visualization-compatible media.

Reading and writing were only democratized in the past 100 years and are the accepted communication tools for scientists and engineers today. A new communication tool, visualization, in time will also be democratized and embraced by the great researchers of the future.

The introduction of visualization technology will profoundly transform the way science is communicated and will facilitate the commission of large-scale engineering projects. Visualization and science go hand in hand as partners. No one ever expected Gutenberg to be Shakespeare as well. Perhaps we will not have to wait 150 years this time for the geniuses to catch up to the technology.

SOURCE: B. H. McCormick, T. A. DeFanti, and M. D. Brown, 1987. Visualization in Scientific Computing (NSF Report). Computer Graphics 21(6). ACM SIGGRAPH: New York, Association for Computing Machinery.

Prototypes include programs that assist in chemical synthesis planning, in planning experiments in molecular genetics, in interpreting mass spectra of organic molecules, in troubleshooting particle beam lines for high energy physicists, and in automated theory formulation in chemistry, physics, and astronomy. The methods needed to assist with complex reasoning tasks are themselves the subject of considerable research in such fields as computer science, cognitive science, and linguistics. Research in these fields, in turn, is producing tools that facilitate research in other disciplines. As these methods are used more widely in the future, some experts predict the conduct of research will change dramatically. Intelligent assistants, in the form of software, can carry out complex planning and interpretation tasks as instructed, leaving humans free to spend time on other tasks. When these reasoning programs are coupled to systems with data-gathering capabilities, much of the drudgery associated with research planning, data collection, and analysis can be reduced. Research laboratories and the conduct of research will become even more productive. When every researcher has intelligent assistants at his/her disposal and when the functions of these assistants are interlinked, science will expand the frontiers of knowledge even more rapidly than it now does.

Future technologies will provide other forms of research support. Programs that recognize and follow natural-language commands, like "Give me the data from this file," can simplify interaction between the researcher and computer systems. Spoken-language recognition offers the advantage of hands-free interaction. Speech production, in which computers generate connected sentences in response to instructions, will, according to one author, lead to a revolutionary expansion in the use of computers in business and office environments (Koenig, 1987).
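Expert systems of the kind described above typically encode knowledge as if-then rules and chain them together until a conclusion emerges. A minimal forward-chaining sketch in Python; the rules are invented toy chemistry, not drawn from any deployed system:

    # Minimal forward-chaining rule engine: repeatedly fire any rule
    # whose conditions are all established until no new facts emerge.
    # The rules themselves are invented toy examples.
    RULES = [
        ({"compound is organic", "contains -OH group"}, "compound is an alcohol"),
        ({"compound is an alcohol", "single carbon"}, "compound is methanol"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)          # rule fires, new fact
                    changed = True
        return facts

    print(infer({"compound is organic", "contains -OH group", "single carbon"}))
    # the result includes 'compound is an alcohol' and then 'compound is methanol'

Real scientific expert systems carry thousands of such rules, plus machinery for uncertainty and explanation, but the chaining loop above is the reasoning core they share.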

A variety of manipulative interfaces of different kinds are under active exploration (Foley, 1987). For example, the "data glove" is a glove on a computer screen that is an image of a specially engineered glove on a researcher's hand. The data glove follows the motions of the researcher's hand, permitting a researcher, for instance, to manipulate a molecule directly on screen. When the data glove is coupled with feedback devices in the researcher's glove, a researcher can "feel" the fit between two molecular structure surfaces.

The Panel believes that the mature and emerging information technologies, taken together, suggest a vision of new approaches to scientific and engineering research. The vision focuses on an open infrastructure for research support and communication among researchers, along with the services for maintaining this infrastructure. Below are several examples of parts of the vision and of forms the vision could take. We discuss further steps in the report's final section on recommendations. (See boxes on pages 35-41.)

INSTITUTIONAL AND BEHAVIORAL IMPEDIMENTS TO THE USE OF INFORMATION TECHNOLOGY IN RESEARCH

Underlying many of the difficulties we have discussed in the use of information technology in research are institutional and behavioral impediments. We have identified six such impediments that seem to affect research in most or all disciplines:

(1) Issues of costs and cost sharing;
(2) The problem of standards;
(3) Legal and ethical constraints;
(4) Gaps in training and education;
(5) Risks of organizational change; and
(6) Most fundamental, the absence of an infrastructure for the use of information technology.

MOLECULAR GRAPHICS

The use of interactive computer graphics to gain insight into chemical complexity began in 1964. Interactive graphics is now an integral part of academic and industrial research on molecular structures and interactions, and the methodology is being successfully combined with supercomputers to model complex systems such as proteins and DNA. Techniques range from simple black-and-white bit-mapped representations of small molecules for substructure searches and synthetic analyses to the most sophisticated 3D color stereographic displays required for advanced work in genetic engineering and drug design.

The attitude of the research and development community toward molecular modeling has changed. What used to be viewed as a sophisticated and expensive way to make pretty pictures for publication is now seen as a valuable tool for the analysis and design of experiments. Molecular graphics complements crystallography, sequencing, chromatography, mass spectrometry, magnetic resonance, and the other tools of the experimentalist, and is an experimental tool in its own right. The pharmaceutical industry, especially in the new and flourishing fields of genetic and protein engineering, is increasingly using molecular modeling to design modifications to known drugs and to propose new therapeutic agents.

SOURCE: B. H. McCormick, T. A. DeFanti, and M. D. Brown, 1987. Visualization in Scientific Computing (NSF Report). Computer Graphics 21(6). ACM SIGGRAPH: New York, Association for Computing Machinery.
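The data-glove interaction described earlier in this section reduces to a tight cycle: sense the hand, update the on-screen model, push force feedback back out. A schematic Python sketch of one such cycle; every class and method below is a hypothetical placeholder standing in for real device drivers, not an actual API:

    # Schematic data-glove loop with stub devices standing in for
    # hardware; every class and method here is a hypothetical placeholder.
    class Glove:
        def read_pose(self):
            return (0.1, 0.2, 0.3)           # hand position (stub)
        def send_feedback(self, force):
            print(f"feedback force: {force:.2f}")

    class Molecule:
        def apply_grab(self, pose):
            self.grab = pose                  # move the grabbed atoms (stub)
        def contact_force(self, pose):
            return sum(pose)                  # resistance at the surface (stub)

    def interaction_step(glove, molecule):
        pose = glove.read_pose()              # 1. sense the hand
        molecule.apply_grab(pose)             # 2. update the model
        force = molecule.contact_force(pose)  # 3. compute resistance
        glove.send_feedback(force)            # 4. let the researcher "feel" it

    interaction_step(Glove(), Molecule())     # one cycle of the loop

Run continuously at display rates, this loop is what lets the researcher "feel" the fit between two molecular surfaces.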

Issues of Costs and Cost Sharing

Many forces drive developments in information technology and its application to research. The result of these developments is constantly increasing requirements for higher performance computer and communications equipment, making current equipment obsolete. Universities and other research organizations are spending increasing fractions of their budgets on information technology to maintain competitive research facilities and to support computer-related instruction. At a number of private research universities, for example, tuition has increased faster than inflation for a number of years, in part to cover some of these costs. It is unrealistic to rely on such funding sources to cover further cost increases that will be required to build local network infrastructures.

A related issue is who will pay for the costs of research computing support. Historically, such costs have been partially recovered by bundling them into charges for use of time-shared mainframe computers. As usage has moved from campus mainframes to other options (ranging from supercomputer centers to workstations and personal computers), this source of revenue has been lost, while the needs for administrative staff and support personnel for consulting, training, and documentation have continued.

RESEARCH ON INTEGRATED INFORMATION SYSTEMS

Nearly a decade ago the Association of American Medical Colleges (AAMC) recognized the strategic importance of information technology to the conduct of biomedical research. In response to a study released by the AAMC in 1982, the National Library of Medicine has supported eleven institutions in efforts to develop strategic plans and prototypes of an Integrated Academic Information Management System (IAIMS). The objective of IAIMS is to develop the institutional information infrastructure that permits individuals to access information they need for their clinical or research work from any computer terminal, wherever and whenever it is needed, pull that information into a local environment, and read, modify, transform it, or otherwise use it for many different purposes.

Several pilot prototype models have emerged. The Baylor Medical College is developing a "virtual notebook," a set of tools for researchers to collect, manipulate, and store data. Georgetown Medical Center has a model called BIOSYNTHESIS that automatically routes a user's query from one database to another. The knowledge sector development of a comprehensive patient management clinical decision support system called HELP is the IAIMS project focus at the University of Utah; and Johns Hopkins University is developing a knowledge workstation.

Efforts to move research support into indirect cost categories have not succeeded, as many research institutions and universities face caps on indirect cost rates and have no room to accommodate new costs.

Advances in communications and computing generate new services that require subsidy during the first years of their existence if they are to be successfully tested. This is particularly true of network-related services. Building services into a national network for research will require significant federal, state, and institutional subsidy, which cannot be recovered from user service charges until large-scale connectivity has been achieved and services are mature. Sources for these subsidies must be determined.

Methods used for cost recovery can have significant impacts on usage. Two alternatives are to charge users for access to services or to charge users for the amount of service used. Networks such as BITNET have grown substantially in connectivity and use because they have fixed annual institutional charges for membership and connection, but charge no fees for use. Use-insensitive charge methods (often referred to as the library model) are attractive to institutions because costs can be treated as infrastructure costs and are predictable.

A REASONABLE MODEL

Although the Panel is unaware of anything precisely like the vision it holds for sharing information, proposals for the newly established National Center for Biotechnology Information (NCBI) at the National Library of Medicine may come close. The NCBI proposes to facilitate easy and effective access to a comprehensive array of information sources that support the molecular biology research community. Many, but not all, of these sources are electronic. They encompass raw data, text, bibliographic information, and graphic representations. Ownership and responsibility for development and maintenance of these sources range from individual researchers to departmental groups, institutes, professional organizations, and federal agencies. Each was designed to serve specific needs and audiences, created in many different hardware configurations and software applications. Consequently, NCBI's mission requires experts in both information technologies and biotechnologies. NCBI staff must

· Provide directories to knowledge sources;
· Create useful network gateways between systems;
· Assist users in using databases effectively;
· Reduce incompatibilities in retrieval approaches, vocabulary, nomenclature, and data structures;
· Promote standards for representing information that will reduce redundancy and detect inconsistencies or errors;
· Provide useful tools for manipulating and displaying data; and
· Identify new analytic and descriptive services and systems.

Some computing-intensive universities (e.g., Carnegie Mellon University and Brown University) and medical centers (e.g., Johns Hopkins University, the University of Utah, Baylor University, and Duke University) are also attempting to develop instances of the vision.
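The two cost-recovery alternatives just described have a simple arithmetic core: a fixed access charge is predictable regardless of use, while a metered charge scales with it. A small worked comparison in Python; all dollar figures are invented for illustration only:

    # Worked comparison of the two cost-recovery models; all dollar
    # figures are invented for illustration only.
    FIXED_ANNUAL_FEE = 20_000        # "library model": flat membership fee
    METERED_RATE = 50                # metered model: dollars per hour of use

    def annual_cost(hours_used, model):
        if model == "fixed":
            return FIXED_ANNUAL_FEE  # predictable, independent of use
        return METERED_RATE * hours_used

    for hours in (100, 400, 1000):
        fixed = annual_cost(hours, "fixed")
        metered = annual_cost(hours, "metered")
        print(f"{hours:>5} h: fixed ${fixed:,}  metered ${metered:,}")
    # break-even at FIXED_ANNUAL_FEE / METERED_RATE = 400 hours; beyond
    # that the flat fee is cheaper, and it never penalizes exploratory use.

The comparison makes the behavioral point concrete: under metering, every additional hour of searching or browsing carries a visible price, which is precisely what inhibits use.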

Charges for amount of use, in contrast, can inhibit usage; a major inhibitor to use of commercial databases for information searches, for instance, is the unpredictability of user charges for time spent searching the databases. During the development of network services, it seems desirable to recover costs through fixed access charges wherever possible.

The Problem of Standards

The development of standards for interconnection makes it possible for every telephone in the world to communicate with every other telephone. The absence of commonly held and implemented standards that would allow every computer to communicate with every other computer and to access information in an intuitive and consistent way is a major impediment to scholarly communication, to the sharing of information resources, and to research productivity.

Standards for computer communication are being developed by many groups. The pace of these efforts is painfully slow, however, and the process is intensely political. The technologies are developing faster than our ability to define standards that can make effective use of them. Further, standards that are developed prematurely can inhibit technological progress; standards developed by one group (for example, an equipment vendor) in isolation create islands of users with whom effective communication is difficult or impossible.

Development of standards not only improves efficiency but also reduces costs. Open interconnection standards permit competition among vendors, which leads to lowered costs and improved capabilities. Proprietary standards restrict competition and lead to increased costs. Federal government procurement rules have been major sources of pressure on vendors to support open standards. Current mechanisms for reaching agreement on standards need examination and significant improvement. Such examination needs input from user groups, which will have to exert pressure on standards bodies and on the vendors who are major players in the standard-setting process.

Legal and Ethical Constraints

The primary legal and ethical constraints to wider use of information technology are issues of the confidentiality of, and access to, data. The following discussion will only illustrate these issues; we believe they are too important and too specialized to be adequately addressed in a document as general as this one. In the report's final section, we recommend the establishment of a body that will study and advise on these issues.

Information technology has made possible large-scale research using data on human subjects. For the first time, researchers can merge data collected by national surveys with data collected in medical, insurance, or tax records. For instance, in public health research, long-term studies of workers exposed to specific hazards can be carried out by linking health insurance data on costs with Internal Revenue data on subsequent earnings, Social Security data on disability payments, and mortality data, including date and cause of death (Steinwachs, 1987, Position Paper: Information Technology and the Conduct of Public Health Research).

The scientific potential of such data mergers is enormous; the actual use of mergers is small, primarily because of concerns about privacy and confidentiality.

The right to confidentiality of personal information is held strongly in our society. Concerns about the conflict between researchers' needs and citizens' rights have been extensively explored by a number of scientific working groups, under the auspices of both governmental agencies (such as the Census Bureau) and private groups (for example, the National Academy of Sciences). As more information about individuals is collected and cross-linked, fears are raised that determined and technically sophisticated computer experts will be able to identify specific individuals, thus breaching promises of confidentiality and privacy of information. The Census Bureau, in particular, fears that publicity surrounding such breaches of confidentiality will undermine public confidence and inhibit cooperation with the decennial censuses. Although there have been discussions and legislative proposals for outright restrictions on mergers of government survey or census data, a reasonable alternative seems to be to impose severe penalties on researchers who breach confidentiality by making use of information on specific individuals. The issue here, as elsewhere in public policy problems, is the balance of benefits against costs. Does better research balance the risk of compromising perceived fundamental rights to privacy? This is a topic that will need to be debated among both researchers and concerned constituencies in the general public.

A related issue is that of acceptable levels of informed consent for human subjects. At present, consent is usually obtained from each respondent to a survey; it is described as informed because the respondent understands what will be done with the responses: usually, that they will be used only for some specific research project. Data-collecting organizations protect the confidentiality of the information obtained from respondents, but guarantee only that information about specific individuals will not be released in such a way that they can be identified. The extent to which informed consent can be given to unknown future uses of survey data, in particular to their merger with other data sources, is of great concern to survey researchers. Controlling the eventual uses of merged, widely distributed data sets would be difficult.

Another concern that needs to be addressed is one of responsibility in computer-supported decision making. Scientists, engineers, and clinicians more and more frequently will use complex software to help analyze and interpret their data. Who then is morally and legally responsible for the correctness of their interpretations, and of actions based on them? Experiments involving dangerous materials or human lives may soon be controlled by computers, just as many commercial aircraft landings are at present. Computers may be capable of faster or more precise determinations in some situations than humans. But software designers lack strong guidelines on assignment of responsibility in case of malfunction or unforeseen disaster, and lack the expertise to guarantee against malfunctions or disasters. With complex software overlaid on complex hardware, it is impossible to prove beyond a doubt in all circumstances that both hardware and software are performing precisely as they were specified to perform.

Gaps in Training and Education

The training and education necessary for using information technology are lacking. Two decades ago many researchers dealt with computers only indirectly, through computer programmers who worked in data processing centers. The development of information technology has brought computing into the researcher's laboratory and office. As a result, the level of computing competence expected of researchers, their support staff, and their students has increased manyfold.

THE FAR SIDE OF THE DREAM: THE LIBRARY OF THE FUTURE

"Can you imagine that they used to have libraries where the books didn't talk to each other?" [Marvin Minsky, MIT]

The libraries of today are warehouses for passive objects. The books and journals sit on shelves, waiting for us to use our intelligence to find them, read them, interpret them, and cause them finally to divulge their stored knowledge. "Electronic" libraries of today are no better. Their pages are pages of data files, but the electronic page images are equally passive.

Now imagine the library as an active, intelligent "knowledge server." It stores the knowledge of the disciplines in complex knowledge structures (perhaps in a formalism yet to be invented). It can reason with this knowledge to satisfy the needs of its users. The needs are expressed naturally, with fluid discourse. The system can, of course, retrieve and exhibit (the electronic textbook). It can collect relevant information; it can summarize; it can pursue relationships. It acts as a consultant on specific problems, offering advice on particular solutions, justifying those solutions with citations or with a fabric of general reasoning. If the user can suggest a solution or a hypothesis, it can check this, even suggest extensions. Or it can critique the user viewpoint, with a detailed rationale of its agreement or disagreement...

The user of the Library of the Future need not be a person. It may be another knowledge system, that is, any intelligent agent with a need for knowledge. Such a Library will be a network of knowledge systems, in which people and machines collaborate.

Publishing is an activity transformed. Authors may bypass text, adding their increment to human knowledge directly to the knowledge structures. Since the thread of responsibility must be maintained, and since there may be disagreement as knowledge grows, the contributions are authored (incidentally allowing for the computation of royalties for access and use). Knowledge base maintenance ("updating") itself becomes a vigorous part of the new publishing industry.

SOURCE: Edward A. Feigenbaum, 1986. Autoknowledge: From file servers to knowledge servers. In: Medinfo 86. R. Salamon, B. Blum, and M. Jorgensen, eds. New York: Elsevier Science Publishers B.V. (North-Holland).

Computers are changing what students need to learn. Undergraduate students of chemistry, for example, need more than the standard courses in organic, inorganic, analytic, and physical chemistry; in the view of many practicing chemists, they should also have courses in calculus, differential equations, linear algebra, and computer simulation techniques, and, through formal courses or practical research experience, should be competent in mathematical reasoning, electronics, computer programming, numerical methods, statistical analysis, and the workings of information management systems (Counts, 1987, Position Paper: The Impact of Information Technologies on the Productivity of Chemistry).

Neither students nor researchers can obtain adequate training and education through one-time training courses. Because the numbers of new tools are multiplying, researchers need ways to continuously learn about, evaluate, and, if necessary, adopt these new tools. Using commercial programs and tutorial systems only partly alleviates the problem because the technologies often change faster than such supports can accommodate to the changes. Instructors in the uses of information technologies within the disciplines are rare.

Senior researchers are especially hard hit. The Panel took no formal survey, but informal discussions suggest that most senior researchers have had exposure to no more than a one-semester programming course and have few of the skills needed to evaluate and use the available technology. For all researchers, learning advanced computing means taking a risk. They must interrupt their work and pay attention to something new and temporarily unproductive. They must become novices, often where sources of appropriate instruction and help are unclear or inaccessible. The investment of time and level of frustration are likely to be high. Understandably, many researchers cannot find the time and the confidence to learn technical computing; some justify their choices with negative attitudes, for example: "I get enough communications as it is; I don't need a computer network," or "If I put my data on the computer, others will steal it," or "We are doing fine as things are; why change at this point?"

Given these natural but negative attitudes, organizations are sometimes slow in responding to demands for new information technologies. Some research organizations view these attitudes as unchangeable and wait to introduce advanced computing until existing researchers move or retire. Others are actively replacing personnel or creating new departments for computational researchers. Still others are attempting to change attitudes by giving researchers the necessary time and support systems. While we have no data on changes in productivity, there is some evidence that in organizations following the latter course, existing researchers at all ranks can achieve as high computing competence as new personnel (Kiesler and Sproull, 1987). Because people are now being introduced to computing skills at earlier stages of schooling, the lag in computer expertise is disappearing. Over time, alternatives to personal expertise in the form of user-friendly software or individual assistance from specialists will also develop.

Risks of Organizational Change

Changing an organization to make way for advanced information technology and its attendant benefits entails real risks. Administrators and research managers are often reluctant to incur the costs (financial, organizational, behavioral) of new technology. In some cases, administrators and research managers relegate computer resources (hardware, software, and people-based support services) to a lower priority than the procurement and maintenance of experimental equipment. The result can be a long-term suppression of the development and use of the tools of information technology. (See the box on the electronic laboratory notebook below.)

DOCUMENTS AS LINKED PIECES: HYPERTEXT

The vision of computing technology revolutionizing how we store and access knowledge is as old as the computing age. In 1945 Vannevar Bush proposed MEMEX, an electro-optical-mechanical information retrieval system that could create links between arbitrary chunks of information and allow the user to follow the links in any desired manner. In the early 1960s, Ted Nelson introduced "hypertext," a form of nonsequential writing: a text that branches and allows choices to the reader, best read at an interactive screen. In 1968, Doug Engelbart demonstrated a simple hypertext system for hierarchically structured documents (that is, a list of sections, each of which decomposes into a list of subsections, each of which decomposes into a list of paragraphs, and so on) to which annotations could be added during a multiple-workstation conference. Today hypertext refers to information storage in which documents are preserved as networks of linked pieces rather than as a single linear string of characters; readers can add links and follow links at will.

Nelson's XANADU system is perhaps the most ambitious hypertext system proposed. XANADU would make all the world's knowledge accessible in a global distributed database to which anyone can add information, and in which anyone can browse or search for information. A document is a set of one or more linked nodes of text, plus links to nodes already in the global database; a document may be mostly links, constructed out of pieces already in the database. Users pay a fee proportional to the number of characters they have stored. Anyone accessing an item in the global database pays an access charge, a portion of which is returned to the owner as a royalty. Individuals can store private documents that cannot have public links pointing to them and can attach annotations to public documents that become available to everyone reading those documents. Documents can be composed of different parts including text, graphics, voice, and video.

INTERMEDIA, a hypertext system with some of these properties, has been implemented at Brown University and has been used to organize information in a humanities course for presentation to students. Small-scale hypertext systems, such as Apple's HyperCard for the Macintosh, are available on personal computers; their promoters claim these systems will change information retrieval as radically as spreadsheets changed accounting a few years ago.

SOURCE: Peter and Dorothy Denning, personal communication, 1987.
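To make the linked-pieces idea concrete, here is a minimal Python sketch of the storage model the box describes: nodes of text plus links that a reader can add and follow at will. All names are invented for illustration; real systems such as XANADU or Intermedia are far richer.

```python
# Toy hypertext store: documents as networks of linked text nodes.
class Node:
    """One chunk of text plus outgoing links to other chunks."""
    def __init__(self, node_id, text):
        self.node_id = node_id
        self.text = text
        self.links = []            # ids of related nodes

repository = {}

def add_node(node_id, text, links=()):
    node = Node(node_id, text)
    node.links.extend(links)
    repository[node_id] = node

add_node("intro", "Hypertext stores documents as linked pieces.")
add_node("memex", "Bush's MEMEX (1945) anticipated arbitrary links.",
         links=["intro"])

# A reader follows a link at will:
for target in repository["memex"].links:
    print(repository[target].text)
```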

In other cases, administrators are misled into underestimating the time and resources required to deploy new information technology. Efforts to develop effective networks have been insufficiently supported by government planners and research institution administrators, who have been led to assume that technology and services to provide network access are easily put in place. Some administrators have promoted change, but without adequate planning for the resources or infrastructure needed to support users. Problems such as these are exacerbated by overly optimistic advice given the administrators by technological enthusiasts. This particular impediment probably cannot be overcome. It can, however, be alleviated by establishing collaborative arrangements to develop plans for and share the costs of change. EDUCOM, for example, is a consortium of research universities with large computing resources that promotes long-range planning and sharing of resources and experiences.

LEGAL CONSTRAINTS TO AN ELECTRONIC VERSION OF A LABORATORY NOTEBOOK

Today, the paper laboratory notebook is the only legally supportable document for patent applications and other regulatory procedures connected with research. Some organizations, however, routinely distribute electronic versions of laboratory notebook information to managers and other professionals who would otherwise have to visit the research site physically or request photocopies. The benefits of legal electronic notebooks are speculative but attested to by those using them informally (Liscouski, 1987). First, they would help give researchers access to information or expertise that is otherwise lost because people have moved or reside in different departments. Second, they would allow research managers and researchers to observe and compare changes in results over time. Third, they would eliminate or make easier the assembly of paper versions of documents needed for government agencies.

The barrier to an electronic notebook is social: its lack of acceptance as a legal document. Such acceptance could take place if legal conditions for an electronic system (storage, format, security) were delineated. However, researchers, scientific associations, and government agencies have failed to develop such guidelines. This failure is probably connected to the traditions of privacy in laboratory notebooks, to the inability to forecast how an electronic system would stand up in court (and, related to that, the risk and unacceptable cost to any single institution of developing a system), and to the uncertainty of the ultimate benefits on some widely accepted index of research effectiveness. Whatever the reasons, the end result is that a complete and accepted electronic notebook remains undeveloped.

Absence of Infrastructure

Most fundamental of all the institutional and behavioral impediments to the use of information technology is the absence of an infrastructure that supports that use. Just as use of a large collection of books is made possible by a building and shelves in which to put them, a cataloguing system, borrowing policies, and reference librarians to assist users, so the use of a collection of computers and computer networks is supported by the existence of institutions, services, policies, and experts: in short, by an infrastructure. On the whole, information technology is inadequately supported by current infrastructures.

An infrastructure that supports information technology applications to research should provide

  • Access to experts who can help;
  • Ways of supporting and rewarding these experts;
  • Tools for developing software, and a market in which the tools are evaluated against one another and disseminated;
  • Communication links among researchers, experts, and the market; and
  • Analogs to the library, places where researchers can store and retrieve information.

Several different kinds of experts in information technology help researchers. Some are specialists in research computing. Some are programmers who develop and maintain software specific to research. Others are specialists who carry out searches. Still others are "gatekeepers," who help with choices of software and hardware. Gatekeepers are members of an informal network of helpers centered around advocates and specialists, experts in both a discipline and in information technology who become known by reputation. Overdependence on gatekeepers creates other problems: as with any informal service, some advice received may be narrowly focused or simply wrong, and the number of persons wanting free information often becomes larger than the number of persons able to provide it. As a result, the gatekeepers may become overloaded and eventually retreat from their gatekeeping roles. To hold on to expert help of all types, research and funding institutions must find ways of supporting and rewarding it. While institutions and disciplines have evolved ways of rewarding researchers (publication in refereed journals, promotion, tenure), no such systems yet reward expert help.

Another aspect of the needed infrastructure is some formal provision for developing and disseminating software for specific research applications. Tools for constructing reliable, efficient, customized, and well-documented software are not used in support of scientific research. Computer science, as a supporting discipline, needs to facilitate rapid delivery of finished software, and easy extension and revision of existing software. The Department of Defense has recently pioneered the creation of a Software Engineering Institute at Carnegie Mellon University. Efforts to create tool building and research resources for nondefense software are worth encouraging. Development and dissemination of scientific software could be speeded in many cases by adoption of emerging commercial standards. These standards are supported by many vendors for a variety of computing environments. The temptation to narrowly match software to specific applications should be resisted in favor of standard approaches.

Software, once developed, needs to be evaluated and disseminated. The research establishment now evaluates research information principally through peer review of funding proposals and manuscripts submitted for publication. Software needs to be dealt with in a similar manner. EDUCOM has recently announced its support of a peer-review process for certain kinds of academic software. Other prototypes of systems for evaluating and disseminating software already exist (see the box on IBM's software market below). These prototypes couple an electronic "market," through which software can be disseminated, with a conferencing capability that allows anyone with access to contribute to the evaluation of the market wares. The system provides an extremely important feature: those contributors who are most successful in the open market can automatically be identified and given credit in much the same way as authors of books and research papers now are.

The infrastructure for information technology also depends on communication links. The Panel believes that one of the most important services that computer networks can provide is the link between users and expert help. Existing links often take the form of electronic bulletin boards on various networks; other mechanisms also exist. Until more formal mechanisms come about, open communication with pioneers, advocates, and enthusiasts is one of the best ways to allow new technologies to be disseminated and evaluated by research communities.

A final piece of infrastructure largely missing is housing and support for the storing and sharing of information. Such a function could be performed by disciplinary groups or, more generally, at the university level. Many university libraries have a professional core staff whose members hold faculty rank and function not only as librarians but also as researchers and teachers. Some university computer centers operate similarly. National laboratories, like astronomical observatories and accelerator facilities, have a core staff of astronomers or physicists whose main task is to serve outside users while also maintaining their own research programs. The existence of such a professional staff involved in the storage and retrieval of information for a discipline would provide a means of recognizing, rewarding, and providing status to these people. In some cases, a university might wish to consider integrating its information science department with its computer center and its library.

AN EXAMPLE OF A SOFTWARE MARKET INFRASTRUCTURE: IBM RESEARCH

IBM's internal computer network connects over 2,000 individual computers worldwide, providing IBM's researchers, developers, and other employees with communications facilities such as electronic mail, file transfers, and access to remote computers. In recent years, software repositories and online conferencing facilities have grown and flourished, and become one of the primary uses of the network. With a single command, any IBMer has access to some 3,000 software packages, developed by other IBMers around the world and made available through the network. Many of these packages are computer utilities and programming tools, but others are tools for research. They include statistical and graphics applications, simulation systems, and AI and expert system shells, as well as many everyday utilities to make general use of the computer simpler. The high level of interconnection offered by the network and the centralization of information offered by the repositories allows scientists with a particular need to see if software to satisfy that need is available, to obtain it if it is, and to develop it if it is not, with confidence that they are not duplicating the efforts of some colleague.

The online conferences (public special-purpose electronic bulletin boards), which are as widespread and accessible as the software repositories, allow users of the software (and of commercial and other software) to exchange experiences, questions, and problems. These conferences provide a form of peer review for the software developer. For internally developed software, they provide a fast and convenient channel between the software author and the users; authors with an interest in improving their programs have instant access to user suggestions and to eager testers. Users with a special need or a hard question have equally fast access to the author for enhancements or answers.

The conferences also allow users with common interests to exchange other sorts of information in the traditional bulletin board style. AI researchers debate the usefulness of the concept of intentionality or discuss how software engineering methodologies apply to expert systems development; computer graphics and vision workers talk about the number of bits required to present a satisfactory image to the human eye. Over 100 individual conferences support thousands of separate discussions about computer hardware and software and virtually every other aspect of IBM's work. The software repositories provide a "reviewed" set of tools and applications for a broad population on a wide spectrum of problems.

The organization that originally sets up a repository or a conference generally provides user support for it (answering "how to do it" questions), and installation and maintenance of local services is usually handled either by an onsite group that has an interest in the specialty served by the facility, or on a more formal basis by the local Information Systems department.

The benefits of these repositories and conferences are widely distributed and hard to quantify, but the success of these software libraries and online conferences within IBM should serve as an encouraging sign for others with the same sorts of needs. A market can be made to succeed, provided that high levels of standardization and compatibility in both hardware and software can be achieved. Such levels of interoperability have, so far, been easier to achieve at commercial institutions such as IBM Research than at research universities.

Computers and telecommunications have revolutionized the processes of scientific research. How is this information technology being applied and what difficulties do scientists face in using information technology? How can these difficulties be overcome?

Information Technology and the Conduct of Research answers these questions and presents a variety of helpful examples. The recommendations address the problems scientists experience in trying to gain the most benefit from information technology in scientific, engineering, and clinical research.


9 Application of Computer in Research

P.G. Padma Gowri

Introduction

Computers are an essential tool in the research process. The main components of a computer are an input device, a central processing unit, and an output device. The computer is indispensable for research, whether for academic or commercial purposes, and computers today play a major role in every field of scientific research, from genetic engineering to astrophysics.

Computers connected to the Internet led the way to a globalized information portal: the World Wide Web. Using the web, researchers can conduct research on a massive scale. Various programs and applications have eased the computational side of the research process. In this module, various computer software applications and tools are discussed with respect to research activities such as data collection and analysis.

Objectives:

  • Understand the features of computers.
  • Know the various steps involved in the research process.
  • Understand the role of computers in research publication.
  • Become familiar with the analysis tools used in the research process.

Features of a Computer

There are many reasons why computers are so important in scientific research; here are some of them:

SPEED: A computer can process numbers and information in a very short time, so a researcher can process and analyze data quickly, and the time saved can be used for further research. A calculation that may take a person several hours to work through will take a computer mere minutes, if not seconds.

STORAGE: A computer can store and retrieve huge amounts of data, which can be used whenever needed. There is no risk of forgetting or losing data.

ACCURACY: Computers are incredibly accurate, and accuracy is very important in scientific research: a wrong calculation could result in an entire research project being filled with incorrect information.

ORGANIZATION: We can store millions of pages of information using simple folders, word processors, and computer programs. A computer is more productive and safer than a paper filing system, in which anything can easily be misplaced.

CONSISTENCY: A computer does not make mistakes through tiredness or lack of concentration, as a human being might. This characteristic makes it exceptionally important in scientific research: large calculations can be done with both accuracy and speed.

AUTOMATION: Once given its instructions, a computer runs its programs automatically, without manual intervention.

Computational Tools

Computers started out as powerful calculators, and that service remains important to research today. Huge amounts of data can be processed with the help of computers. Statistical programs, modelling programs, and spatial mapping tools are all common uses. Researchers can also use information in new ways, for example by layering different types of maps on one another to discover new patterns in how people use their environment.

Communication

Building knowledge through research requires communication between experts to identify new areas requiring research and to debate results and ideas. Before the invention of computers, this was accomplished through papers and workshops. Now the world's experts can communicate via web chats or email, and information can be spread in various ways, for example by virtual conferences.

Because researchers can take computers anywhere, it is easier to conduct field research and collect large amounts of data. New research in remote areas or at the community level is made possible by the mobility of computers. Social media sites offer a new medium for interacting with society and collecting information.

The Steps in the Research Process

The research process consists of a series of actions necessary to carry out research work effectively. The sequence of these steps is listed below:

  • Formulating the research problem;
  • Extensive literature survey;
  • Developing the hypothesis;
  • Preparing the research design;
  • Determining sample design;
  • Data Collection;
  • Project Execution;
  • Data Analysis;
  • Hypothesis testing;
  • Generalizations and interpretation; and
  • Preparation of the report or presentation of the results, i.e., formal write-up of conclusions of the research.

Computers in Research

Computers are used extensively in scientific research and are an important tool throughout the research process. They are very useful for processing huge numbers of samples, and they offer many storage options, such as compact discs and auxiliary memories, from which data can be retrieved later. The stored data can be used across the different phases of the research process.

There are five major phases of the research process:

  • Conceptual phase
  • Design and planning phase
  • Data collection phase
  • Data Analysis phase and
  • Research Publication phase

Conceptual Phase and Computer

The conceptual phase consists of formulating the research problem, carrying out an extensive literature survey, building the theoretical framework, and developing the hypothesis.

Computers help in searching the existing literature in the relevant field of research and in finding relevant existing research papers, so that the researcher can identify the gaps in the existing literature. They also help in searching the literature and bibliographic references stored in the electronic databases of the World Wide Web.

They can be used to store relevant published articles to be retrieved whenever needed. This has an advantage over searching the literature in the form of journals, books, and newsletters at libraries, which consumes a considerable amount of time and effort.

Bibliographic references can also be stored on the World Wide Web, and on modern computers references can easily be written in different styles. The researcher need not visit libraries, which frees more time for the research itself and for building the theoretical framework.

Design and Planning Phase and Computer

Computers can be used for deciding the population sample, designing the questionnaire, and planning data collection. Various internet sites help in designing questionnaires, and software can be used to calculate the sample size. This makes a pilot study of the research possible; pilot studies require sample size calculations and standard deviations, and computers help with all of these activities.
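As a concrete illustration of the sample-size calculation mentioned above, here is a minimal Python sketch using the standard formula for estimating a population proportion, n = z^2 * p * (1 - p) / e^2. The defaults (95% confidence, p = 0.5) are common conservative choices, not prescriptions.

```python
# Minimal sample-size sketch for estimating a proportion.
# Assumed formula: n = z^2 * p * (1 - p) / e^2
import math

def sample_size(p=0.5, margin_of_error=0.05, z=1.96):
    """z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative guess when the true proportion is unknown."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size())                        # 385 respondents
print(sample_size(margin_of_error=0.03))    # 1068 respondents
```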

Role of Computers in the Data Collection Phase

The empirical phase consists of collecting and preparing the data for analysis.

In research studies, the preparation and computation of data are the most labor-intensive and time-consuming aspects of the work. Typically, the data will initially be recorded on a questionnaire or record form suitable for acceptance by the computer. To do this, the researcher, together with the statistician and the programmer, converts the data into a Microsoft Word file, an Excel spreadsheet, or a statistical software data file. These data can then be used directly with statistical software for analysis.

Data collection and Storage:

The data obtained from the research subjects are stored in computers in the form of Word files, Excel spreadsheets, or statistical software data files. This has the advantage of allowing corrections, or edits to the whole layout of tables if needed, which would be impossible or time-consuming with handwritten records. Thus, computers help with data editing, data entry, and data management, including follow-up actions. Computers also allow greater flexibility in recording and processing the data while they are collected, as well as greater ease during the analysis of these data.
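As a minimal sketch of this conversion step, the snippet below loads a hypothetical CSV file of questionnaire responses with pandas and performs a couple of typical editing steps; the file and column names are invented for illustration.

```python
# Hypothetical example: move raw questionnaire data into analysis-ready form.
import pandas as pd

df = pd.read_csv("survey_responses.csv")          # invented file name

# Data editing: drop duplicate respondents, coerce bad numeric entries.
df = df.drop_duplicates(subset="respondent_id")   # invented column name
df["age"] = pd.to_numeric(df["age"], errors="coerce")  # bad entries -> NaN
df = df.dropna(subset=["age"])

df.to_csv("survey_clean.csv", index=False)        # file for the statistician
```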

Data exposition:

Researchers are eager to see the data: what they look like and how they are distributed. They also examine different dimensions of the variables or plot them in various charts using a statistical application.

Data Analysis and Computer:

This phase consists of the statistical analysis of the data, interpretation of the results, and hypothesis testing. Data analysis and interpretation can be done with the help of computers: software is available that supports techniques such as averages, percentages, correlation, and other mathematical calculations.

Packages used for data analysis include SPSS, Stata, and SYSTAT. Computers are useful not only for statistical analysis but also for monitoring the accuracy and completeness of the data as they are collected. These packages can also display the results in chart or graph form.

Computers are used in interpretation as well. They can check the accuracy and authenticity of the data, and they help in drafting tables from which a researcher can interpret the results easily. These tables give clear support for the interpretations made by the researcher.
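As a rough illustration, the averages, percentages, and correlations described above can be scripted in Python with pandas; the data and column names below are synthetic stand-ins for a cleaned survey file.

```python
# Descriptive analysis on synthetic survey data.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "M"],
    "hours":  [8,   5,   9,   7,   4,   6],
    "score":  [72,  65,  80,  77,  60,  70],
})

print(df["score"].mean())                         # average score
print(df["gender"].value_counts(normalize=True))  # percentage breakdown
print(df["score"].corr(df["hours"]))              # Pearson correlation
```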

Role of Computer in Research Publication

After interpretation, computers help in converting the results into a research article or report that can be published. This phase consists of the preparation of the report or presentation of the results, i.e., the formal write-up of the conclusions reached. This is the research publication phase. The research article, paper, thesis, or dissertation is typed in word-processing software, converted to Portable Document Format (PDF), and stored and/or published on the World Wide Web. Online services can convert a word-processing file into formats such as HTML or PDF.

Various online applications are also available for this purpose. One can even prepare a document using online word-processing software and store, edit, and access it from anywhere over the Internet.

References and computer:

After completing the document, a researcher needs to cite the sources of the literature studied and discussed in the references. Computers also help in preparing references, which can be written in different styles. The details of authors, journals, publication volumes, and books can be entered into the reference tool, which automatically converts the information into the required style. Reference management software handles this task.

A researcher need not worry about remembering all the articles from which the literature was taken; this can easily be managed with the help of a computer.
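A toy Python sketch of what reference-management software does behind the scenes: store the bibliographic details once, then render them in whatever style is required. The entry and the formatting rule below are simplified inventions, not a real citation standard.

```python
# Invented bibliographic entry and a simplified APA-like renderer.
entry = {
    "author": "Smith, J.",
    "year": 2015,
    "title": "An invented study of data analysis",
    "journal": "Journal of Hypothetical Results",
    "volume": 12,
    "pages": "34-56",
}

def render_apa_like(e):
    return (f'{e["author"]} ({e["year"]}). {e["title"]}. '
            f'{e["journal"]}, {e["volume"]}, {e["pages"]}.')

print(render_apa_like(entry))
```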

Simulation:

Simulation is the imitation of the operation of a real-world process or system over time. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. It is mainly used when the real system cannot be engaged: because it may not be accessible, it may be dangerous or unacceptable to engage, it is being designed but not yet built, or it may simply not exist. Using computers, simulations are carried out in many fields of research.
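A minimal example of the pattern: a Monte Carlo simulation in Python that estimates pi by random sampling. The "system" here is simple geometry, but the structure (a model, random inputs, many trials, an aggregate result) is the one research simulations follow.

```python
# Monte Carlo estimate of pi: sample random points in the unit square
# and count how many fall inside the quarter circle of radius 1.
import random

def estimate_pi(trials=1_000_000):
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / trials

print(estimate_pi())   # close to 3.1416; improves with more trials
```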

Role of Computers in Scientific Research:

There are various computer applications used in scientific research. Some of the most important are data storage, data analysis, scientific simulations, instrumentation control, and knowledge sharing.

Data Storage

Experimentation is the basis of scientific research. A scientific experiment in any of the natural sciences generates a lot of data that must be stored and analyzed to derive important conclusions and to validate or disprove hypotheses. Computers attached to experimental apparatus directly record data as they are generated and subject them to analysis through specially designed software. Data can be stored as SPSS data files, Lotus or Excel spreadsheets, plain text files, and so on.

Data Analysis

Analyzing huge amounts of statistical data is made possible by specially designed algorithms implemented on computers, turning the extremely time-consuming job of data analysis into a matter of a few minutes. In genetic engineering, computers have made the sequencing of the entire human genome possible. Data from different sources can be stored in and accessed via computer networks set up in research labs, which makes collaboration simpler.

Scientific Simulations

One of the prime uses of computers in pure science and engineering projects is the running of simulations. A simulation is a mathematical modeling of a problem and a virtual study of its possible solutions.

For example, astrophysicists carry out structure formation simulations, which are aimed at studying how large-scale structures like galaxies are formed. Space missions to the Moon, satellite launches and interplanetary missions are first simulated on computers to determine the best path that can be taken by the launch vehicle and spacecraft to reach its destination safely.

Instrumentation Control

Most advanced scientific instruments come with their own on-board computer, which can be programmed to execute various functions. For example, the Hubble Space Telescope has its own onboard computer system, which is remotely programmed to probe deep space. Instrumentation control is one of the most important applications of computers.

Knowledge Sharing through Internet

In the form of Internet, computers have provided an entirely new way to share knowledge. Today, anyone can access the latest research papers that are made available for free on websites. Sharing of knowledge and collaboration through the Internet has made international cooperation on scientific projects possible.

Through various kinds of analytical software programs, computers are contributing to scientific research in every discipline, ranging from biology to astrophysics, discovering new patterns and providing novel insights.

As work on neural-network-based artificial intelligence advances and computers gain the ability to learn and think for themselves, future advances in technology and research will be even more rapid.

Tools and Applications Used in the Research Process

Statistical Analysis Tool: SPSS

SPSS is the most popular tool for statisticians. SPSS stands for Statistical Package for the Social Sciences.

It provides analysis facilities such as the following, and many more (a scripted illustration follows the list):

  • Data view and variable view
  • Measures of central tendency and dispersion
  • Statistical inference
  • Correlation and regression analysis
  • Analysis of variance
  • Nonparametric tests
  • Hypothesis tests: t-test, chi-square test, z-test, ANOVA, etc.
  • Multivariate data analysis
  • Frequency distributions
  • Data exposition using various graphs: line, scatter, bar, ogive, and histogram charts
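SPSS itself is driven through menus rather than code, but as a rough open-source illustration, a few of the analyses listed above (descriptives, correlation, an independent-samples t-test) can be scripted in Python with pandas and SciPy. The data here are synthetic and the column names invented.

```python
# A few SPSS-style analyses scripted on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["control", "treatment"], size=200),
    "score": rng.normal(70, 10, size=200),
    "hours": rng.normal(5, 2, size=200),
})

print(df["score"].describe())         # central tendency and dispersion
print(df["score"].corr(df["hours"]))  # Pearson correlation

a = df.loc[df["group"] == "control", "score"]
b = df.loc[df["group"] == "treatment", "score"]
print(stats.ttest_ind(a, b))          # independent-samples t-test
```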

Data Analysis Tool:

Spreadsheet Packages

A spreadsheet is a computer application that simulates a paper worksheet. It displays multiple cells that together make up a grid of rows and columns, each cell containing either alphanumeric text or numeric values. Microsoft Excel is popular spreadsheet software. Other spreadsheet packages are Lotus 1-2-3, Quattro Pro, Javelin Plus, Multiplan, VisiCalc, SuperCalc, PlanPerfect, etc.

Other Statistical Tools

SAS, S-Plus, LISREL, EViews, etc.

Word Processor Packages

A word processor (more formally known as a document preparation system) is a computer application used for the production (including composition, editing, formatting, and possibly printing) of any sort of printable material.

The word processing packages include Microsoft Word, WordStar, WordPerfect, AmiPro, etc.

Presentation Software

A presentation program is a computer software package used to display information, normally in the form of a slide show. It typically includes three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images, and a slideshow system to display the content. Presentation packages include Microsoft PowerPoint, Lotus Freelance Graphics, Corel Presentations, Apple Keynote, etc.

Database Management Packages (DBMS)

A database is an organized collection of information, and a DBMS is software designed to manage a database. Desktop databases include Microsoft Access, Paradox, dBase (dBase III+), FoxBASE, FoxPro/Visual FoxPro, and FileMaker Pro. Commercial database servers that support multiple users include Oracle, MS SQL Server, Sybase, Ingres, Informix, DB2 UDB (IBM), Unify, Integral, etc. Open-source database packages include MySQL, PostgreSQL, and Firebird. A sketch of simple database use follows.
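As a minimal sketch of database use in a research setting, the snippet below uses Python's built-in sqlite3 module as a stand-in for the desktop and server DBMSs named above; the table and column names are invented.

```python
# Store and query observations in a small SQLite database.
import sqlite3

conn = sqlite3.connect("research.db")
conn.execute("""CREATE TABLE IF NOT EXISTS observations (
                    sample_id INTEGER PRIMARY KEY,
                    site TEXT,
                    value REAL)""")
conn.executemany("INSERT INTO observations (site, value) VALUES (?, ?)",
                 [("north", 4.2), ("north", 3.9), ("south", 5.1)])
conn.commit()

# A typical query: the average measured value per site.
for row in conn.execute(
        "SELECT site, AVG(value) FROM observations GROUP BY site"):
    print(row)

conn.close()
```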

Browsers

A web browser is a software application that enables a user to display and interact with text, images, videos, music, games, and other information typically located on a web page at a website on the World Wide Web or a local area network.

Examples are Microsoft Internet Explorer, Mozilla Firefox, Opera, Netscape Navigator, and Chrome.

Computers have helped overcome many difficulties faced by human beings. With the passing of time, they have shrunk from the size of a room to the size of a human palm. A computer performs many functions and does a variety of jobs with speed and accuracy.

Today, life without computers has become nearly unimaginable. They are used in schools and colleges and have become an indispensable part of every business and profession. Research is also an area where computers play a major role.

The use of computers in scientific research is so extensive that it is difficult to conceive of a research project today without them. Many research studies cannot be carried out without computers, particularly those involving complex computations, data analysis, and modeling. In scientific research, computers are used at all stages, from the study proposal and budget stage to the submission and presentation of findings.


Why is Computer Science So Important?

  • December 1, 2021


Recently, we wrote about what computer science graduates do, then we explained how to get a job in the field, and here we're going to explore what makes computer science important in the first place.

As part of this discussion, we’ll cover what computer science professionals contribute to modern businesses, what role they play in maintaining, creating, and pushing the boundaries of technology, and why you should consider launching a career in the industry.

After you've learned everything you need to know about why computer science is important, fill out our information request form to receive additional details about CSU Global's 100% online Bachelor's Degree in Computer Science, or if you're ready to get started, submit your application today.

What is Computer Science and Why is it so Important?

Computer science is the process of solving complex organizational problems using technical solutions.

The reason this is such an important field is that computers and technology have been integrated into virtually every economic sector, industry, and even organization operating in the modern economy.

Professionals working in computer science roles are responsible for some of the most important tasks needed to keep businesses running, including:

  • Analyzing the impact of computers and computing on individuals, organizations, and society.
  • Designing, building, maintaining, and updating software systems of varying complexity.
  • Building, implementing, and evaluating computer-based systems and processes.
  • Leveraging technical solutions to solve complicated problems.

Everywhere you look, you’ll find computers and other technological systems or devices powering business decisions and operations.

It’s virtually impossible to run a modern business without utilizing computer-driven technology, which is just one of the many reasons why people consider computer science to be so important.

Where Do Computer Scientists Work?

If you think that computer scientists can only find work at tech organizations, then you may be pleasantly surprised to learn that this is nowhere near true.

People working in computer science roles can find jobs throughout the modern economy, working in different environments at virtually every type of organization.

There are so many different job titles for people trained in computer science that it’d be impossible to provide a full list of them all, but here’s a shortlist of potential job titles you’d be able to pursue after earning your degree in the field:

  • Web developer
  • User interface designer
  • Systems analyst
  • Software tester
  • Software quality assurance manager
  • Software engineer
  • Software developer
  • Research and development (R&D) scientist
  • Product manager
  • Network architect
  • Mobile application designer or developer
  • Information technology specialist
  • Information security analyst
  • Full-stack developer
  • Engineering manager
  • Database administrator
  • Data scientist
  • Computer scientist or computer science researcher
  • Computer science professor
  • Cloud computing engineer
  • Chief information security officer
  • Business analyst
  • Artificial intelligence and machine learning engineer

As you might imagine, the skills you’d develop studying computer science can be applied to a huge variety of applications across nearly every industry.

What Do Computer Scientists Actually Do?

As we mentioned above, getting your degree in computer science opens doors to all sorts of different job titles and career niches.

And as you might imagine, the specific daily responsibilities for each of these roles can vary significantly, so it’s nearly impossible to provide an accurate description of what somebody working in computer science does. 

What’s important to realize here is that completing a computer science degree would likely provide you with a vast array of potential job options.

And should you choose to study computer science, four of the best job titles you might want to focus on pursuing include:

  • Computer and Information Research Scientists / 2021 Median Pay: $131,490
  • Software Developers, Quality Assurance Analysts, and Testers / 2021 Median Pay: $109,020
  • Computer Systems Analysts / 2021 Median Pay: $93,730
  • Computer Programmers / 2021 Median Pay: $93,000

As you can see, these roles provide excellent rates of compensation, which makes computer science a great major for anyone who wants to maximize their earning potential.

Should You Pursue a Career in Computer Science?

That depends on what you want to do for work, but if you’re interested in technology and you want to work with computers or technical systems and solutions, then this could be the perfect field for you.

There are many good reasons to think about studying computer science, but three of the most compelling reasons to specialize in this area include:

  • Demand for computer science professionals continues to rise and is projected to grow steadily over the next decade. 
  • Computer science professionals play a critical role wherever they work, developing complex solutions that help organizations overcome a variety of difficult challenges.
  • Jobs in computer science tend to pay excellent salaries.

To help give you a little more context on why you should take the opportunity to study computer science so seriously, let’s look at each of these in a bit more detail.

Demand for Computer Science Professionals is Projected to Continue Growing

The BLS reports that the employment of computer scientists will rise considerably over the next decade.

In fact, according to the BLS’s latest projections, demand for 3 of the best roles in the industry is set to explode between 2021 and 2031:

  • Computer and Information Research Scientists - 21% growth
  • Software Developers, Quality Assurance Analysts, and Testers - 25% growth
  • Computer Systems Analysts - 9% growth

Knowing that employment rates are going to rise by so much over the next decade, you can be confident that you’ll graduate from a Bachelor’s Degree program in Computer Science into a healthy job market looking to employ people with your particular skill set.

This is a great way to help increase the chances that your degree will make a measurable, positive impact on your career by helping you to secure stable employment in a growing, lucrative industry.

Computer Scientists Are Critical to Organizational Success

Because computer scientists are responsible for so many important tasks at modern organizations, they play a critical role in ensuring the health of nearly every business.

No matter which area of computer science you choose to specialize in, you can be nearly certain that the work you do will be important and useful to whatever organization ends up employing you.

According to a recent survey by Peldon Rose, employees report that feeling appreciated is the most important factor in determining their happiness at work, and playing a central, valued role as a computer science professional could be a great way to improve the chances that you'll truly enjoy what you do each day throughout your career.

Playing a critical role won’t just make you feel good though, and as we’re about to see, it might also help ensure that you’re well paid.

Computer Scientists Have Excellent Earnings Potential 

Skilled computer scientists can earn considerable incomes, especially after finding themselves employed in some of the industry’s best jobs.

The BLS reports great rates of compensation for some of the more popular jobs in computer science, including:

  • Computer Programmers / 2021 Median Pay: $93,000
  • Computer Systems Analysts / 2021 Median Pay: $99,270

And while this is just a very short list of the many potential positions you could pursue after completing your B.S. in Computer Science, the good news is that other jobs in this field also tend to pay relatively high salaries.

Income isn’t the only thing that matters to people, but if you do feel it’s important to focus your efforts on building a career in a field with good earning potential, then you might be hard-pressed to find a better niche than computer science. 

How to Launch Your Career in Computer Science

It’s certainly possible to break into the field without a degree, but you’re likely to have an easier time landing your first job, and landing a good job, if you’ve got the education credentials to prove that you know what you’re doing.

Why? Because computer science positions are so important to business success that hiring managers looking to fill open positions are extremely likely to prefer candidates with a degree on their resume.

Getting your degree in computer science proves that you’ve dedicated the time and effort required to develop your knowledge, skills, and abilities in the field, increasing the chances that you’ll be able to deliver real value to an organization.

Completing a B.S. in Computer Science certainly won’t guarantee that you’re able to get a job in the field, but it should allow you to pursue good jobs in the industry with the full confidence that you’re prepared to make a meaningful contribution wherever you choose to apply.

Should You Get Your Computer Science Degree Online?

Yes, you should think about getting your degree in CS online, and from CSU Global.

Our accelerated program was designed to be completed entirely online, and it provides much more flexibility and freedom than a competing on-campus program.

Studying online with us will make it far easier to juggle your studies with work and family responsibilities, as we provide several significant benefits, including:

  • No requirements to attend classes at set times or locations.
  • Access to monthly class starts.
  • Accelerated, eight-week courses.

If you’re looking for a streamlined, flexible online degree program that interferes as little as possible with your other responsibilities, then you should choose to study online with us.

Why Should You Pick CSU Global’s Online Computer Science Program?

Our online Bachelor’s Degree in Computer Science program is designed to provide you with the skills and knowledge you need to launch a successful lifelong career in this challenging, but lucrative industry.

You can be sure that your degree will be respected by potential employers, not only because our program is regionally accredited by the Higher Learning Commission, but also because it recently earned the #1 ranking for Best Computer Networks Degree Programs in 2021 from Best Value Schools.

CSU Global itself is also widely regarded as a leader in online education, having recently been awarded several distinguished rankings, including:

  • A #1 ranking for Best Online Colleges & Schools in Colorado from Best Accredited Colleges.
  • A #1 ranking for Best Online Colleges in Colorado from Best Colleges.
  • A #10 ranking for Best Online Colleges for ROI from OnlineU.

To make sure that our program delivers real-world value, all of our faculty have recent experience in the field, and our curriculum is aligned with criteria for industry-leading certifications, including the Oracle Certified Associate, Java SE 8 Programmer, and the C++ Certified Associate Programmer from the C++ Institute.

Finally, to help save you money on the cost of your degree, we offer competitive tuition rates and a Tuition Guarantee, which ensures that your affordable tuition rate can’t increase between enrollment and graduation.

To get additional details about our fully accredited, 100% online Bachelor’s Degree in Computer Science, please give us a call at 800-462-7845, or fill out our Information Request Form.

Ready to get started today? Apply now !

Research in Computer Science Education

  • First Online: 06 August 2020


  • Orit Hazzan (ORCID: orcid.org/0000-0002-8627-0997)
  • Noa Ragonis (ORCID: orcid.org/0000-0002-8163-0199)
  • Tami Lapidot


Computer science education research addresses students’ difficulties, misconceptions, and cognitive abilities, activities that can be integrated into the learning process, the use of visualization and animation tools, the computer science teacher’s role, difficulties, and professional development, and many more topics. This meaningful shared knowledge of the computer science education community can enrich the perspective of computer science teachers’ professional development. The chapter exposes the MTCS students to this rich resource and lets them practice ways in which they can use it in their future work. This knowledge may enhance lesson preparation, the kinds of activities developed for learners, awareness of learners’ difficulties, ways to improve concept understanding, and the testing and grading of learners’ projects and tests. We first explain the importance of exposing the students to the knowledge gained by the computer science education research community. Then, we demonstrate different topics addressed in such research works and suggest activities to facilitate in the MTCS course with respect to this research.




Author information

Authors and Affiliations

Department of Education in Science & Technology, Technion–Israel Institute of Technology, Haifa, Israel

Orit Hazzan & Tami Lapidot

Faculty of Education, Beit Berl College, Doar Beit Berl, Israel

Noa Ragonis



Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Hazzan, O., Ragonis, N., Lapidot, T. (2020). Research in Computer Science Education. In: Guide to Teaching Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-030-39360-1_7

DOI: https://doi.org/10.1007/978-3-030-39360-1_7

Published: 06 August 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-39359-5

Online ISBN: 978-3-030-39360-1



500+ Computer Science Research Topics


Computer Science is a constantly evolving field that has transformed the world we live in today. With new technologies emerging every day, there are countless research opportunities in this field. Whether you are interested in artificial intelligence, machine learning, cybersecurity, data analytics, or computer networks, there are endless possibilities to explore. In this post, we delve into some of the most interesting and important research topics in Computer Science, from advancements in programming languages to the development of cutting-edge algorithms, covering the trends and innovations that are shaping the field’s future. Whether you are a student or a professional, read on to discover some of the most exciting research topics in this dynamic and rapidly expanding field.


Computer Science Research Topics are as follows:

  • Using machine learning to detect and prevent cyber attacks
  • Developing algorithms for optimized resource allocation in cloud computing
  • Investigating the use of blockchain technology for secure and decentralized data storage
  • Developing intelligent chatbots for customer service
  • Investigating the effectiveness of deep learning for natural language processing
  • Developing algorithms for detecting and removing fake news from social media
  • Investigating the impact of social media on mental health
  • Developing algorithms for efficient image and video compression
  • Investigating the use of big data analytics for predictive maintenance in manufacturing
  • Developing algorithms for identifying and mitigating bias in machine learning models
  • Investigating the ethical implications of autonomous vehicles
  • Developing algorithms for detecting and preventing cyberbullying
  • Investigating the use of machine learning for personalized medicine
  • Developing algorithms for efficient and accurate speech recognition
  • Investigating the impact of social media on political polarization
  • Developing algorithms for sentiment analysis in social media data
  • Investigating the use of virtual reality in education
  • Developing algorithms for efficient data encryption and decryption
  • Investigating the impact of technology on workplace productivity
  • Developing algorithms for detecting and mitigating deepfakes
  • Investigating the use of artificial intelligence in financial trading
  • Developing algorithms for efficient database management
  • Investigating the effectiveness of online learning platforms
  • Developing algorithms for efficient and accurate facial recognition
  • Investigating the use of machine learning for predicting weather patterns
  • Developing algorithms for efficient and secure data transfer
  • Investigating the impact of technology on social skills and communication
  • Developing algorithms for efficient and accurate object recognition
  • Investigating the use of machine learning for fraud detection in finance
  • Developing algorithms for efficient and secure authentication systems
  • Investigating the impact of technology on privacy and surveillance
  • Developing algorithms for efficient and accurate handwriting recognition
  • Investigating the use of machine learning for predicting stock prices
  • Developing algorithms for efficient and secure biometric identification
  • Investigating the impact of technology on mental health and well-being
  • Developing algorithms for efficient and accurate language translation
  • Investigating the use of machine learning for personalized advertising
  • Developing algorithms for efficient and secure payment systems
  • Investigating the impact of technology on the job market and automation
  • Developing algorithms for efficient and accurate object tracking
  • Investigating the use of machine learning for predicting disease outbreaks
  • Developing algorithms for efficient and secure access control
  • Investigating the impact of technology on human behavior and decision making
  • Developing algorithms for efficient and accurate sound recognition
  • Investigating the use of machine learning for predicting customer behavior
  • Developing algorithms for efficient and secure data backup and recovery
  • Investigating the impact of technology on education and learning outcomes
  • Developing algorithms for efficient and accurate emotion recognition
  • Investigating the use of machine learning for improving healthcare outcomes
  • Developing algorithms for efficient and secure supply chain management
  • Investigating the impact of technology on cultural and societal norms
  • Developing algorithms for efficient and accurate gesture recognition
  • Investigating the use of machine learning for predicting consumer demand
  • Developing algorithms for efficient and secure cloud storage
  • Investigating the impact of technology on environmental sustainability
  • Developing algorithms for efficient and accurate voice recognition
  • Investigating the use of machine learning for improving transportation systems
  • Developing algorithms for efficient and secure mobile device management
  • Investigating the impact of technology on social inequality and access to resources
  • Machine learning for healthcare diagnosis and treatment
  • Machine Learning for Cybersecurity
  • Machine learning for personalized medicine
  • Cybersecurity threats and defense strategies
  • Big data analytics for business intelligence
  • Blockchain technology and its applications
  • Human-computer interaction in virtual reality environments
  • Artificial intelligence for autonomous vehicles
  • Natural language processing for chatbots
  • Cloud computing and its impact on the IT industry
  • Internet of Things (IoT) and smart homes
  • Robotics and automation in manufacturing
  • Augmented reality and its potential in education
  • Data mining techniques for customer relationship management
  • Computer vision for object recognition and tracking
  • Quantum computing and its applications in cryptography
  • Social media analytics and sentiment analysis
  • Recommender systems for personalized content delivery
  • Mobile computing and its impact on society
  • Bioinformatics and genomic data analysis
  • Deep learning for image and speech recognition
  • Digital signal processing and audio processing algorithms
  • Cloud storage and data security in the cloud
  • Wearable technology and its impact on healthcare
  • Computational linguistics for natural language understanding
  • Cognitive computing for decision support systems
  • Cyber-physical systems and their applications
  • Edge computing and its impact on IoT
  • Machine learning for fraud detection
  • Cryptography and its role in secure communication
  • Cybersecurity risks in the era of the Internet of Things
  • Natural language generation for automated report writing
  • 3D printing and its impact on manufacturing
  • Virtual assistants and their applications in daily life
  • Cloud-based gaming and its impact on the gaming industry
  • Computer networks and their security issues
  • Cyber forensics and its role in criminal investigations
  • Machine learning for predictive maintenance in industrial settings
  • Augmented reality for cultural heritage preservation
  • Human-robot interaction and its applications
  • Data visualization and its impact on decision-making
  • Cybersecurity in financial systems and blockchain
  • Computer graphics and animation techniques
  • Biometrics and its role in secure authentication
  • Cloud-based e-learning platforms and their impact on education
  • Natural language processing for machine translation
  • Machine learning for predictive maintenance in healthcare
  • Cybersecurity and privacy issues in social media
  • Computer vision for medical image analysis
  • Natural language generation for content creation
  • Cybersecurity challenges in cloud computing
  • Human-robot collaboration in manufacturing
  • Data mining for predicting customer churn
  • Artificial intelligence for autonomous drones
  • Cybersecurity risks in the healthcare industry
  • Machine learning for speech synthesis
  • Edge computing for low-latency applications
  • Virtual reality for mental health therapy
  • Quantum computing and its applications in finance
  • Biomedical engineering and its applications
  • Cybersecurity in autonomous systems
  • Machine learning for predictive maintenance in transportation
  • Computer vision for object detection in autonomous driving
  • Augmented reality for industrial training and simulations
  • Cloud-based cybersecurity solutions for small businesses
  • Natural language processing for knowledge management
  • Machine learning for personalized advertising
  • Cybersecurity in the supply chain management
  • Cybersecurity risks in the energy sector
  • Computer vision for facial recognition
  • Natural language processing for social media analysis
  • Machine learning for sentiment analysis in customer reviews
  • Explainable Artificial Intelligence
  • Quantum Computing
  • Blockchain Technology
  • Human-Computer Interaction
  • Natural Language Processing
  • Cloud Computing
  • Robotics and Automation
  • Augmented Reality and Virtual Reality
  • Cyber-Physical Systems
  • Computational Neuroscience
  • Big Data Analytics
  • Computer Vision
  • Cryptography and Network Security
  • Internet of Things
  • Computer Graphics and Visualization
  • Artificial Intelligence for Game Design
  • Computational Biology
  • Social Network Analysis
  • Bioinformatics
  • Distributed Systems and Middleware
  • Information Retrieval and Data Mining
  • Computer Networks
  • Mobile Computing and Wireless Networks
  • Software Engineering
  • Database Systems
  • Parallel and Distributed Computing
  • Human-Robot Interaction
  • Intelligent Transportation Systems
  • High-Performance Computing
  • Cyber-Physical Security
  • Deep Learning
  • Sensor Networks
  • Multi-Agent Systems
  • Human-Centered Computing
  • Wearable Computing
  • Knowledge Representation and Reasoning
  • Adaptive Systems
  • Brain-Computer Interface
  • Health Informatics
  • Cognitive Computing
  • Cybersecurity and Privacy
  • Internet Security
  • Cybercrime and Digital Forensics
  • Cloud Security
  • Cryptocurrencies and Digital Payments
  • Machine Learning for Natural Language Generation
  • Cognitive Robotics
  • Neural Networks
  • Semantic Web
  • Image Processing
  • Cyber Threat Intelligence
  • Secure Mobile Computing
  • Cybersecurity Education and Training
  • Privacy Preserving Techniques
  • Cyber-Physical Systems Security
  • Virtualization and Containerization
  • Machine Learning for Computer Vision
  • Network Function Virtualization
  • Cybersecurity Risk Management
  • Information Security Governance
  • Intrusion Detection and Prevention
  • Biometric Authentication
  • Machine Learning for Predictive Maintenance
  • Security in Cloud-based Environments
  • Cybersecurity for Industrial Control Systems
  • Smart Grid Security
  • Software Defined Networking
  • Quantum Cryptography
  • Security in the Internet of Things
  • Natural language processing for sentiment analysis
  • Blockchain technology for secure data sharing
  • Developing efficient algorithms for big data analysis
  • Cybersecurity for internet of things (IoT) devices
  • Human-robot interaction for industrial automation
  • Image recognition for autonomous vehicles
  • Social media analytics for marketing strategy
  • Quantum computing for solving complex problems
  • Biometric authentication for secure access control
  • Augmented reality for education and training
  • Intelligent transportation systems for traffic management
  • Predictive modeling for financial markets
  • Cloud computing for scalable data storage and processing
  • Virtual reality for therapy and mental health treatment
  • Data visualization for business intelligence
  • Recommender systems for personalized product recommendations
  • Speech recognition for voice-controlled devices
  • Mobile computing for real-time location-based services
  • Neural networks for predicting user behavior
  • Genetic algorithms for optimization problems
  • Distributed computing for parallel processing
  • Internet of things (IoT) for smart cities
  • Wireless sensor networks for environmental monitoring
  • Cloud-based gaming for high-performance gaming
  • Social network analysis for identifying influencers
  • Autonomous systems for agriculture
  • Robotics for disaster response
  • Data mining for customer segmentation
  • Computer graphics for visual effects in movies and video games
  • Virtual assistants for personalized customer service
  • Natural language understanding for chatbots
  • 3D printing for manufacturing prototypes
  • Artificial intelligence for stock trading
  • Machine learning for weather forecasting
  • Biomedical engineering for prosthetics and implants
  • Cybersecurity for financial institutions
  • Machine learning for energy consumption optimization
  • Computer vision for object tracking
  • Natural language processing for document summarization
  • Wearable technology for health and fitness monitoring
  • Internet of things (IoT) for home automation
  • Reinforcement learning for robotics control
  • Big data analytics for customer insights
  • Machine learning for supply chain optimization
  • Natural language processing for legal document analysis
  • Artificial intelligence for drug discovery
  • Computer vision for object recognition in robotics
  • Data mining for customer churn prediction
  • Autonomous systems for space exploration
  • Robotics for agriculture automation
  • Machine learning for predicting earthquakes
  • Natural language processing for sentiment analysis in customer reviews
  • Big data analytics for predicting natural disasters
  • Internet of things (IoT) for remote patient monitoring
  • Blockchain technology for digital identity management
  • Machine learning for predicting wildfire spread
  • Computer vision for gesture recognition
  • Natural language processing for automated translation
  • Big data analytics for fraud detection in banking
  • Internet of things (IoT) for smart homes
  • Robotics for warehouse automation
  • Machine learning for predicting air pollution
  • Natural language processing for medical record analysis
  • Augmented reality for architectural design
  • Big data analytics for predicting traffic congestion
  • Machine learning for predicting customer lifetime value
  • Developing algorithms for efficient and accurate text recognition
  • Natural Language Processing for Virtual Assistants
  • Natural Language Processing for Sentiment Analysis in Social Media
  • Explainable Artificial Intelligence (XAI) for Trust and Transparency
  • Deep Learning for Image and Video Retrieval
  • Edge Computing for Internet of Things (IoT) Applications
  • Data Science for Social Media Analytics
  • Cybersecurity for Critical Infrastructure Protection
  • Natural Language Processing for Text Classification
  • Quantum Computing for Optimization Problems
  • Machine Learning for Personalized Health Monitoring
  • Computer Vision for Autonomous Driving
  • Blockchain Technology for Supply Chain Management
  • Augmented Reality for Education and Training
  • Natural Language Processing for Sentiment Analysis
  • Machine Learning for Personalized Marketing
  • Big Data Analytics for Financial Fraud Detection
  • Cybersecurity for Cloud Security Assessment
  • Artificial Intelligence for Natural Language Understanding
  • Blockchain Technology for Decentralized Applications
  • Virtual Reality for Cultural Heritage Preservation
  • Natural Language Processing for Named Entity Recognition
  • Machine Learning for Customer Churn Prediction
  • Big Data Analytics for Social Network Analysis
  • Cybersecurity for Intrusion Detection and Prevention
  • Artificial Intelligence for Robotics and Automation
  • Blockchain Technology for Digital Identity Management
  • Virtual Reality for Rehabilitation and Therapy
  • Natural Language Processing for Text Summarization
  • Machine Learning for Credit Risk Assessment
  • Big Data Analytics for Fraud Detection in Healthcare
  • Cybersecurity for Internet Privacy Protection
  • Artificial Intelligence for Game Design and Development
  • Blockchain Technology for Decentralized Social Networks
  • Virtual Reality for Marketing and Advertising
  • Natural Language Processing for Opinion Mining
  • Machine Learning for Anomaly Detection
  • Big Data Analytics for Predictive Maintenance in Transportation
  • Cybersecurity for Network Security Management
  • Artificial Intelligence for Personalized News and Content Delivery
  • Blockchain Technology for Cryptocurrency Mining
  • Virtual Reality for Architectural Design and Visualization
  • Natural Language Processing for Machine Translation
  • Machine Learning for Automated Image Captioning
  • Big Data Analytics for Stock Market Prediction
  • Cybersecurity for Biometric Authentication Systems
  • Artificial Intelligence for Human-Robot Interaction
  • Blockchain Technology for Smart Grids
  • Virtual Reality for Sports Training and Simulation
  • Natural Language Processing for Question Answering Systems
  • Machine Learning for Sentiment Analysis in Customer Feedback
  • Big Data Analytics for Predictive Maintenance in Manufacturing
  • Cybersecurity for Cloud-Based Systems
  • Artificial Intelligence for Automated Journalism
  • Blockchain Technology for Intellectual Property Management
  • Virtual Reality for Therapy and Rehabilitation
  • Natural Language Processing for Language Generation
  • Machine Learning for Customer Lifetime Value Prediction
  • Big Data Analytics for Predictive Maintenance in Energy Systems
  • Cybersecurity for Secure Mobile Communication
  • Artificial Intelligence for Emotion Recognition
  • Blockchain Technology for Digital Asset Trading
  • Virtual Reality for Automotive Design and Visualization
  • Natural Language Processing for Semantic Web
  • Machine Learning for Fraud Detection in Financial Transactions
  • Big Data Analytics for Social Media Monitoring
  • Cybersecurity for Cloud Storage and Sharing
  • Artificial Intelligence for Personalized Education
  • Blockchain Technology for Secure Online Voting Systems
  • Virtual Reality for Cultural Tourism
  • Natural Language Processing for Chatbot Communication
  • Machine Learning for Medical Diagnosis and Treatment
  • Cybersecurity for Cloud Computing Environments
  • Virtual Reality for Training and Simulation
  • Big Data Analytics for Sports Performance Analysis
  • Cybersecurity for Internet of Things (IoT) Devices
  • Artificial Intelligence for Traffic Management and Control
  • Blockchain Technology for Smart Contracts
  • Natural Language Processing for Document Summarization
  • Machine Learning for Image and Video Recognition
  • Blockchain Technology for Digital Asset Management
  • Virtual Reality for Entertainment and Gaming
  • Natural Language Processing for Opinion Mining in Online Reviews
  • Machine Learning for Customer Relationship Management
  • Big Data Analytics for Environmental Monitoring and Management
  • Cybersecurity for Network Traffic Analysis and Monitoring
  • Artificial Intelligence for Natural Language Generation
  • Blockchain Technology for Supply Chain Transparency and Traceability
  • Virtual Reality for Design and Visualization
  • Natural Language Processing for Speech Recognition
  • Machine Learning for Recommendation Systems
  • Big Data Analytics for Customer Segmentation and Targeting
  • Cybersecurity for Biometric Authentication
  • Artificial Intelligence for Human-Computer Interaction
  • Blockchain Technology for Decentralized Finance (DeFi)
  • Virtual Reality for Tourism and Cultural Heritage
  • Machine Learning for Cybersecurity Threat Detection and Prevention
  • Big Data Analytics for Healthcare Cost Reduction
  • Cybersecurity for Data Privacy and Protection
  • Artificial Intelligence for Autonomous Vehicles
  • Blockchain Technology for Cryptocurrency and Blockchain Security
  • Virtual Reality for Real Estate Visualization
  • Natural Language Processing for Question Answering
  • Big Data Analytics for Financial Markets Prediction
  • Cybersecurity for Cloud-Based Machine Learning Systems
  • Artificial Intelligence for Personalized Advertising
  • Blockchain Technology for Digital Identity Verification
  • Virtual Reality for Cultural and Language Learning
  • Natural Language Processing for Semantic Analysis
  • Machine Learning for Business Forecasting
  • Big Data Analytics for Social Media Marketing
  • Artificial Intelligence for Content Generation
  • Blockchain Technology for Smart Cities
  • Virtual Reality for Historical Reconstruction
  • Natural Language Processing for Knowledge Graph Construction
  • Machine Learning for Speech Synthesis
  • Big Data Analytics for Traffic Optimization
  • Artificial Intelligence for Social Robotics
  • Blockchain Technology for Healthcare Data Management
  • Virtual Reality for Disaster Preparedness and Response
  • Natural Language Processing for Multilingual Communication
  • Machine Learning for Emotion Recognition
  • Big Data Analytics for Human Resources Management
  • Cybersecurity for Mobile App Security
  • Artificial Intelligence for Financial Planning and Investment
  • Blockchain Technology for Energy Management
  • Virtual Reality for Cultural Preservation and Heritage.
  • Big Data Analytics for Healthcare Management
  • Cybersecurity in the Internet of Things (IoT)
  • Artificial Intelligence for Predictive Maintenance
  • Computational Biology for Drug Discovery
  • Virtual Reality for Mental Health Treatment
  • Machine Learning for Sentiment Analysis in Social Media
  • Human-Computer Interaction for User Experience Design
  • Cloud Computing for Disaster Recovery
  • Quantum Computing for Cryptography
  • Intelligent Transportation Systems for Smart Cities
  • Cybersecurity for Autonomous Vehicles
  • Artificial Intelligence for Fraud Detection in Financial Systems
  • Social Network Analysis for Marketing Campaigns
  • Cloud Computing for Video Game Streaming
  • Machine Learning for Speech Recognition
  • Augmented Reality for Architecture and Design
  • Natural Language Processing for Customer Service Chatbots
  • Machine Learning for Climate Change Prediction
  • Big Data Analytics for Social Sciences
  • Artificial Intelligence for Energy Management
  • Virtual Reality for Tourism and Travel
  • Cybersecurity for Smart Grids
  • Machine Learning for Image Recognition
  • Augmented Reality for Sports Training
  • Natural Language Processing for Content Creation
  • Cloud Computing for High-Performance Computing
  • Artificial Intelligence for Personalized Medicine
  • Virtual Reality for Architecture and Design
  • Augmented Reality for Product Visualization
  • Natural Language Processing for Language Translation
  • Cybersecurity for Cloud Computing
  • Artificial Intelligence for Supply Chain Optimization
  • Blockchain Technology for Digital Voting Systems
  • Virtual Reality for Job Training
  • Augmented Reality for Retail Shopping
  • Natural Language Processing for Sentiment Analysis in Customer Feedback
  • Cloud Computing for Mobile Application Development
  • Artificial Intelligence for Cybersecurity Threat Detection
  • Blockchain Technology for Intellectual Property Protection
  • Virtual Reality for Music Education
  • Machine Learning for Financial Forecasting
  • Augmented Reality for Medical Education
  • Natural Language Processing for News Summarization
  • Cybersecurity for Healthcare Data Protection
  • Artificial Intelligence for Autonomous Robots
  • Virtual Reality for Fitness and Health
  • Machine Learning for Natural Language Understanding
  • Augmented Reality for Museum Exhibits
  • Natural Language Processing for Chatbot Personality Development
  • Cloud Computing for Website Performance Optimization
  • Artificial Intelligence for E-commerce Recommendation Systems
  • Blockchain Technology for Supply Chain Traceability
  • Virtual Reality for Military Training
  • Augmented Reality for Advertising
  • Natural Language Processing for Chatbot Conversation Management
  • Cybersecurity for Cloud-Based Services
  • Artificial Intelligence for Agricultural Management
  • Blockchain Technology for Food Safety Assurance
  • Virtual Reality for Historical Reenactments
  • Machine Learning for Cybersecurity Incident Response.
  • Secure Multiparty Computation
  • Federated Learning
  • Internet of Things Security
  • Blockchain Scalability
  • Quantum Computing Algorithms
  • Explainable AI
  • Data Privacy in the Age of Big Data
  • Adversarial Machine Learning
  • Deep Reinforcement Learning
  • Online Learning and Streaming Algorithms
  • Graph Neural Networks
  • Automated Debugging and Fault Localization
  • Mobile Application Development
  • Software Engineering for Cloud Computing
  • Cryptocurrency Security
  • Edge Computing for Real-Time Applications
  • Natural Language Generation
  • Virtual and Augmented Reality
  • Computational Biology and Bioinformatics
  • Internet of Things Applications
  • Robotics and Autonomous Systems
  • Explainable Robotics
  • 3D Printing and Additive Manufacturing
  • Distributed Systems
  • Parallel Computing
  • Data Center Networking
  • Data Mining and Knowledge Discovery
  • Information Retrieval and Search Engines
  • Network Security and Privacy
  • Cloud Computing Security
  • Data Analytics for Business Intelligence
  • Neural Networks and Deep Learning
  • Reinforcement Learning for Robotics
  • Automated Planning and Scheduling
  • Evolutionary Computation and Genetic Algorithms
  • Formal Methods for Software Engineering
  • Computational Complexity Theory
  • Bio-inspired Computing
  • Computer Vision for Object Recognition
  • Automated Reasoning and Theorem Proving
  • Natural Language Understanding
  • Machine Learning for Healthcare
  • Scalable Distributed Systems
  • Sensor Networks and Internet of Things
  • Smart Grids and Energy Systems
  • Software Testing and Verification
  • Web Application Security
  • Wireless and Mobile Networks
  • Computer Architecture and Hardware Design
  • Digital Signal Processing
  • Game Theory and Mechanism Design
  • Multi-agent Systems
  • Evolutionary Robotics
  • Quantum Machine Learning
  • Computational Social Science
  • Explainable Recommender Systems.
  • Artificial Intelligence and its applications
  • Cloud computing and its benefits
  • Cybersecurity threats and solutions
  • Internet of Things and its impact on society
  • Virtual and Augmented Reality and its uses
  • Blockchain Technology and its potential in various industries
  • Web Development and Design
  • Digital Marketing and its effectiveness
  • Big Data and Analytics
  • Software Development Life Cycle
  • Gaming Development and its growth
  • Network Administration and Maintenance
  • Machine Learning and its uses
  • Data Warehousing and Mining
  • Computer Architecture and Design
  • Computer Graphics and Animation
  • Quantum Computing and its potential
  • Data Structures and Algorithms
  • Computer Vision and Image Processing
  • Robotics and its applications
  • Operating Systems and its functions
  • Information Theory and Coding
  • Compiler Design and Optimization
  • Computer Forensics and Cyber Crime Investigation
  • Distributed Computing and its significance
  • Artificial Neural Networks and Deep Learning
  • Cloud Storage and Backup
  • Programming Languages and their significance
  • Computer Simulation and Modeling
  • Computer Networks and its types
  • Information Security and its types
  • Computer-based Training and eLearning
  • Medical Imaging and its uses
  • Social Media Analysis and its applications
  • Human Resource Information Systems
  • Computer-Aided Design and Manufacturing
  • Multimedia Systems and Applications
  • Geographic Information Systems and its uses
  • Computer-Assisted Language Learning
  • Mobile Device Management and Security
  • Data Compression and its types
  • Knowledge Management Systems
  • Text Mining and its uses
  • Cyber Warfare and its consequences
  • Wireless Networks and its advantages
  • Computer Ethics and its importance
  • Computational Linguistics and its applications
  • Autonomous Systems and Robotics
  • Information Visualization and its importance
  • Geographic Information Retrieval and Mapping
  • Business Intelligence and its benefits
  • Digital Libraries and their significance
  • Artificial Life and Evolutionary Computation
  • Computer Music and its types
  • Virtual Teams and Collaboration
  • Computer Games and Learning
  • Semantic Web and its applications
  • Electronic Commerce and its advantages
  • Multimedia Databases and their significance
  • Computer Science Education and its importance
  • Computer-Assisted Translation and Interpretation
  • Ambient Intelligence and Smart Homes
  • Autonomous Agents and Multi-Agent Systems.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Areas of Research


On this page:

  • Assistive Technologies and Learning with Disabilities
  • Biomedical Informatics
  • Biomedical Imaging and Visualization
  • Cloud Computing
  • Cybersecurity
  • Cyber-Physical Systems
  • Databases and Data Mining
  • Data Science and Analytics
  • Multimedia Systems and Apps
  • Semantic, Social and Sensor Web
  • Machine Learning and Artificial Intelligence
  • Wireless Networking and Security

Assistive Technologies and Learning with Disabilities

"Disabilities can be very traumatic, leading to frustration and depression," according to the American Foundation for the Blind. The rate of unemployment among legally blind individuals of working age residing in the United States greatly exceeds the unemployment rate for individuals with no functional limitations. Clever devices and information technology engineering strategies can be developed to help people overcome barriers to pursue educational and professional opportunities that will allow them to become productive members of the society.

Current Research Projects

  • Reading devices for the blind and visually impaired
  • Navigation devices for the blind
  • Multimodal forms of representation for virtual learning environments
  • Rehabilitation Assistants

Researchers

  • Nikolaos Bourbakis

Research Labs

  • Center of Assistive Research Technologies (CART)

Biomedical Informatics

Bioinformatics advances fundamental concepts in molecular biology, biochemistry, and computer science to further our understanding of DNA, genes, and protein structures as they relate to mechanisms for drug development and the treatment of disease.

Current Research Projects

  • Metabolomics and toxicology
  • Trends in molecular evolution
  • Automation of forensic DNA analysis
  • Indexing genomic databases
  • Stochastic reaction modeling
  • Search optimization
  • National model for bioinformatics education
  • Disease analysis

Researchers

  • Travis Doom
  • Guozhu Dong
  • Michael Raymer
  • Tanvi Banerjee
  • T.K. Prasad

Research Labs

  • Bioinformatic Research Group

Biomedical Imaging and Visualization

Biomedical imaging and visualization research has become a very active field during the last two decades, offering unique solutions for a great variety of biological and biomedical problems. Analysis and visualization of medical images facilitate diagnosis and treatment planning, and visualization systems used as surgical navigation systems enable precise, minimally invasive surgery. (A minimal segmentation sketch follows the lists below.)

Current Research Projects

  • Image registration in surgical navigation
  • Segmentation of MR and CT images for spinal surgery
  • Design of a surgical robot assistance for biopsy
  • Detection and visualization of brain shift during brain surgery
  • Automated endoscopic imaging
  • EEG+fMRI Modeling of the Brain
  • Ultrasound Modeling of Human organs (heart, liver)
  • Bio-signatures of in-vivo cells

Researchers

  • Thomas Wischgoll

Research Labs

  • Advanced Visual Data Analysis (AViDA)
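As a rough illustration of the simplest ingredient such segmentation pipelines build on, here is a minimal intensity-thresholding sketch in Python. It uses scikit-image's bundled sample photograph as a stand-in for scan data and Otsu's classic global threshold; it is not the group's actual method.

```python
# Minimal intensity-thresholding segmentation (a stand-in, not a medical pipeline).
from skimage import data
from skimage.filters import threshold_otsu

image = data.camera()              # bundled grayscale photo, standing in for a scan
cutoff = threshold_otsu(image)     # Otsu's method: pick the cutoff that best
                                   # separates the two intensity classes
mask = image > cutoff              # boolean segmentation mask
print(f"threshold={cutoff}, foreground fraction={mask.mean():.2f}")
```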

Cloud Computing

Cloud computing is a major step toward organizing all aspects of computation as a public utility service. It embraces concepts such as software as a service and platform as a service, including services for workflow facilities, application design and development, deployment and hosting, data integration, and the management of software. The cloud platform grows in importance as our industry shifts from in-house data management to cloud-hosted data management to improve efficiency and focus on core businesses. However, like any new technology, there are formidable problems, from performance issues to security and privacy, from metadata management to massively parallel execution. (A minimal client-side sketch follows the lists below.)

This is a major part of the Kno.e.sis Research Center.

Current Research Projects

  • Cloud infrastructure for data management
  • Privacy and security in cloud data management
  • Cloud-based mining and learning algorithms
  • Cloud support for text mining and web search
  • Large-scale natural language modeling and translation
  • Parallel and distributed algorithms for bioinformatics
  • Performance evaluation and benchmarking

Research Labs

  • Database Research Laboratory
  • Bioinformatics Research Group
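As a minimal client-side illustration of cloud-hosted data management, the sketch below writes and reads back one object in Amazon S3 via boto3. The bucket name is hypothetical, and credentials are assumed to be configured in the environment.

```python
# Toy client-side view of cloud-hosted data management via Amazon S3 (boto3).
# The bucket name is hypothetical; credentials come from the environment.
import boto3

s3 = boto3.client("s3")
bucket = "example-research-data"          # hypothetical bucket

s3.put_object(Bucket=bucket, Key="notes.txt", Body=b"hello cloud")   # write
body = s3.get_object(Bucket=bucket, Key="notes.txt")["Body"].read()  # read back
print(body.decode())
```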

Cybersecurity

The Department of Computer Science and Engineering of Wright State University recently received a grant, titled "REU Site: Cybersecurity Research at Wright State University," from the National Science Foundation. This NSF REU site offers a ten-week summer program that aims to provide a diverse group of motivated undergraduates with competitive research experiences in cybersecurity. A variety of projects will be offered in Network Security, Intrusion Detection, Wireless Sensor Network Security, Internet Malware Detection, Analysis, and Mitigation, Software Reverse Engineering and Vulnerability Discovery, and Privacy-Preserving Data Mining. More information about this REU site can be found at http://reu.cs.wright.edu.

In addition, there are ongoing projects sponsored by DARPA and ONR covering deepfake techniques, deep understanding of technical documents, and computer security (such as memory attacks).

Researchers

  • Junjie Zhang

Research Labs

  • WSU Cybersecurity Lab

Related Programs

  • Master of Science in Cybersecurity
  • Undergraduate

Cyber-Physical Systems

Cyber-physical systems are jointly physical and computational and are characterized by complex loops of cause and effect between the computational and physical components. We focus on creating methods by which such systems can self-adapt to repair damage and exploit opportunities, and methods by which we can explain and understand how they operate even after they have diverged from their original forms. Our current application area is the creation of control systems for insect-like flapping-wing air vehicles that repair themselves, in flight, after suffering wing damage. (A minimal feedback-loop sketch follows below.)

Click here for more information about Cyber Physical Systems at Wright State University
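To make the "loop of cause and effect" concrete, here is a minimal sketch in Python of a sensed-error feedback loop: a PI controller (the cyber part) driving a toy first-order plant (the physical part). The gains and dynamics are illustrative only, not a flight controller.

```python
# Toy cyber-physical feedback loop: a PI controller (cyber) driving a
# first-order plant x' = -x + u (physical). Gains/dynamics are illustrative.
setpoint, state, integral = 1.0, 0.0, 0.0
kp, ki, dt = 2.0, 1.0, 0.1

for _ in range(200):                  # simulate 20 seconds
    error = setpoint - state          # sense: compare measurement to the goal
    integral += error * dt            # accumulate error to remove steady offset
    u = kp * error + ki * integral    # decide: PI control law
    state += dt * (-state + u)        # act: plant responds to the control input

print(f"state after 20 s: {state:.3f}")   # settles near the 1.0 setpoint
```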

Databases and Data Mining

Data mining is the process of extracting useful knowledge from a database. It facilitates the characterization, classification, clustering, and searching of different databases, including text data, image and video data, and bioinformatics data, for various applications. Because text, multimedia, and bioinformatics databases are very large, parallel/distributed data mining is essential for scalable performance. (A minimal clustering sketch follows the lists below.)

  • Parallel/distributed data mining
  • Text/image clustering and categorization
  • Metadata for timelining events
  • XML database
  • Data warehousing
  • Biological/medical data mining
  • Data Mining Research Lab

Data Science and Analytics

Mathematical, statistical, and graphical methods for exploring large and complex data sets.  Methods include statistical pattern recognition, multivariate data analysis, classifiers, modeling and simulation, and scientific visualization.
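As a small illustration of the multivariate methods named above, the following hypothetical Python sketch projects synthetic data onto two principal components and then fits a simple classifier; all data and parameters are made up.

```python
# Exploratory projection (PCA) followed by statistical pattern recognition.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, size=(50, 10))   # two synthetic classes
class1 = rng.normal(1.0, 1.0, size=(50, 10))   # in ten dimensions
X = np.vstack([class0, class1])
y = np.array([0] * 50 + [1] * 50)

X2 = PCA(n_components=2).fit_transform(X)      # reduce to two components
clf = LogisticRegression().fit(X2, y)          # fit a simple classifier
print("training accuracy:", clf.score(X2, y))
```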

  • Topological Data Analysis
  • Predictive Analytics
  • Michelle Cheatham
  • Machine Learning and Complex Systems Lab
  • Data Science for Healthcare

Multimedia systems offer synergistic and integrated solutions for a wide variety of applications involving multi-modal data, such as automatic target recognition, surveillance, and human behavior tracking.

  • Object recognition in digital images and video
  • Multimedia content classification and indexing
  • Integrated search and retrieval in multimedia repositories
  • Background elimination in live video
  • Modeling and visualization
  • Biometrics and cyber security
  • Network and security visualization

Semantic, Social and Sensor Webs

The World Wide Web contains a rapidly growing amount of enterprise, social, and device/sensor/IoT/WoT data in unstructured, semistructured, and structured forms. The Semantic Web initiative by the World Wide Web Consortium (W3C), of which Wright State University is a member (represented by Kno.e.sis), has developed standards and technologies to associate meaning with data, to make data more understandable to machines and humans, and to apply reasoning techniques for intelligent processing, leading to actionable information, insights, and discovery. Kno.e.sis has one of the largest academic groups in the US working on the Semantic Web and its applications for better use and analysis of social and sensor data.

  • Computer assisted document interpretation tools
  • Information extraction from semi-structured documents
  • Semantic Web knowledge representation
  • Semantic sensor web
  • Linked and Big Data

Machine Learning and Artificial Intelligence

Machine learning and artificial intelligence aim to develop computer systems that exhibit intelligent behavior in decision making, object recognition, planning, learning, and other applications that require intelligent assessment of complex information.  Our faculty apply modern tools such as deep neural networks, evolutionary algorithms, statistical inference, topological analysis, and graphical inference models to a wide variety of problems from engineering, science, and medicine.
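As a concrete toy example of one such tool, the sketch below trains a small neural network classifier on synthetic data with scikit-learn; it is illustrative only and not drawn from any of the faculty projects listed below.

```python
# A small feed-forward neural network on a classic two-moons toy problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 32 units each, trained with Adam.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```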

  • Knowledge Representation and Reasoning
  • Intelligent agents
  • Natural language understanding
  • Evolutionary algorithms and evolvable hardware
  • Autonomous robotic systems
  • Machine learning
  • Fuzzy and neural systems
  • Intelligent control systems
  • Deep Neural Networks

Wireless communication and networking have revolutionized the way people communicate. Currently, there are more than two billion cellular telephone subscribers worldwide. Wireless local area networks have become a necessity in many parts of the globe. With new wireless-enabled applications being proposed every day, such as wireless sensor networks, telemedicine, music telepresence, and the intelligent web, the potential of this discipline is just being unleashed.

  • Ultra-high speed optical network
  • Wireless sensor network
  • Music telepresence
  • Cognitive radio and dynamic spectrum access
  • Secure protocol and secure processors authentication
  • Cyber-physical systems
  • Network coding

Take the Next Step

Finding the right college means finding the right fit. See all that the College of Engineering and Computer Science has to offer by visiting campus.

[email protected]

Engineering and Computer Science, College of

[email protected]

ASEE Diversity Recognition Program bronze

Departments and Programs

  • Biomedical, Industrial, and Human Factors Engineering
  • Electrical Engineering
  • Mechanical and Materials Engineering
  • Ph.D. in Engineering
  • Success Stories



  • Career readiness skills
  • Published Mar 1, 2023

Why students need Computer Science to succeed


As technology continues to evolve at an accelerated pace, transforming the way we live and work in the process, we find ourselves navigating the challenges of an always-changing digital landscape. Understanding the principles of computing is quickly becoming an essential skill. It provides people with a keen understanding of how technology impacts their lives, empowers them to become full participants in society, and unlocks a wide range of career opportunities. This is especially true for today’s students, who will rely on computing skills throughout their lives, making it necessary for them to have opportunities to learn Computer Science (CS).

A report by LinkedIn and Microsoft revealed that 149 million new digital jobs will be created by 2025 in fields such as software development, data analysis, cybersecurity, and AI. However, education cannot currently meet the growing demand for people with CS skills. As of October 2022, only 33% of technology jobs worldwide were filled by adequately skilled workers. And by 2030, the global shortage of tech workers will represent an $8.5 trillion loss in annual revenue, according to research cited by the International Monetary Fund i .

"Around the world, technology is opening up opportunities for new ways to solve the challenges and needs of businesses and organizations, everything from technology-focused [industries] to agriculture, healthcare, financial services, transportation, and so many more. They're all struggling to find the talent they need to fill many of the jobs." Christina Thoresen, Director of Worldwide Education Industry Sales Strategy at Microsoft

A growing interest in CS curricula

Learning coding and software development, two key parts of CS, has been shown to improve students’ creativity, critical thinking, math, and reasoning skills ii . CS skills like problem-solving iii and planning iv are transferable and can be applied across other subjects. A 2020 study examining the effects of CS courses on students’ academic careers in the United States showed that they have a significant impact on the likelihood of enrolling in college v . Moreover, CS can be useful for many courses and degrees including biology, chemistry, economics, engineering, geology, mathematics, materials science, medicine, physics, psychology, and sociology vi . 

CS curricula that are relevant and engaging provide an additional benefit: they attract traditionally marginalized groups, including girls, and empower those with less access to technological resources to develop high-value skills and unlock new and exciting career opportunities. It is also worth noting that, due to enduring talent shortages, CS-related fields consistently offer above-average pay and have the fastest-growing wages vii .

How Microsoft supports CS implementation

Microsoft has been helping educational institutions around the world develop rich CS curricula that empower all students with the skills they need to confidently transition from classroom to career. By creating content that is meaningful and engaging for all students, as well as helping promote equal access to CS in school, Microsoft is fulfilling its commitment to making learning more inclusive and equitable. One of the principal resources for this is Microsoft's Computer Science Guide (MCSG), a comprehensive CS framework that includes:

  • An implementation plan
  • Training for educators
  • Lesson and project suggestions
  • Practical guidance for coding activities
  • Certification

An important part of building up students' CS capabilities is to engage learners as early as possible, which encourages and supports creative expression and the development of computational thinking skills. However, national CS curricula often focus on ICT or simple coding exercises and offer little in terms of immersive, hands-on experiences that feel relevant, authentic, and inclusive. The MCSG was made to engage students of all ages through a learner-centric curriculum using constructivism, hands-on activities, problem-solving, and inquiry-based approaches that are often linked to real-world challenges viii .

CS curriculum design can also help address a well-documented gender divide ix by engaging all students as early as primary school using relevant and meaningful content. It can ensure that all students have access to CS courses based on their needs and abilities, regardless of socio-economic status, race, ethnicity, or special learning needs. Additionally, as students are likely to encounter changes in technology that are difficult to imagine over the course of their education, another key goal of the MCSG is to be future-proof by incorporating subjects that are likely to be highly relevant well into the future.

"Computer science skills are critical to succeed in today's economy, but too many students – especially those from diverse backgrounds and experiences – are excluded from computer science. That's why we've created a new resource guide which we hope will help teachers build inclusive computer science education programs." Naria Santa Lucia, General Manager of Digital Inclusion and Community Engagement for Microsoft Philanthropies

Georgia Ministry of Education develops national CS program

In 2022, the Ministry of Education and Science of Georgia launched a pilot program to test how the Microsoft CS Curriculum could be integrated into primary classes as part of a national campaign to introduce broader CS concepts and computational thinking to K-12 learning. The pilot project focused on two ICT teachers and was reviewed by volunteer educators from other cities. An advisory board was formed consisting of experts from the National Curriculum Department.

The process involved translating the Foundation Phase of the Microsoft CS Curriculum Toolkit into Georgian, as well as weekly meetings to discuss progress. In the end, the teachers designed two curricula for the 2nd and 3rd grades, and the project team made a recommendation for a completely new framework concept that considered the existing National Curriculum context, the integration of the Microsoft CS Curriculum Framework, and additional concepts from the Computer Science Teachers Association.

Learn more about computer science with Microsoft

It is no longer possible to ignore the critical importance of CS skills to students whose lives are going to revolve around their ability to understand and engage with technology, both at work and in their day-to-day lives. At Microsoft Education, our goal is to empower every learner on the planet to achieve more. That is why we are working together with governments and education leaders around the world to implement CS in schools and ensure that students feel included, supported, and empowered to confidently follow their passions and achieve great success both in their careers and in life.

  • Start building a CS curriculum using the Microsoft Computer Science Curriculum Toolkit .
  • To inspire a STEM passion in K-12 learners and teach them how to code with purpose, use Minecraft’s Computer Science Progression .
  • Find out how the Microsoft TEALS Program can help you create access to equitable, inclusive CS education and learn more about building inclusive economic growth .
  • Enlist one of Microsoft’s Global Training Partners to support your educators to incorporate CS into their curriculum and teaching practices.

i https://www.imf.org/en/Publications/fandd/issues/2019/03/global-competition-for-technology-workers-costa   

ii https://codeorg.medium.com/cs-helps-students-outperform-in-school-college-and-workplace-66dd64a69536   

iii Can Majoring in CS Improve General Problem-solving Skills?, ACM, Salehi et al., 2020   

iv The effects of coding on children’s planning and inhibition skills, Computers & Education, Arfé et al., 2020   

v http://www.westcoastanalytics.com/uploads/6/9/6/7/69675515/longitudinal_study_-_combined_report_final_3_10_20__jgq_.pdf   

vi https://www.hereford.ac.uk/explore-courses/courses/computer-science/   

vii https://www.thebalancemoney.com/average-salary-information-for-us-workers-2060808   

viii Kotsopoulos, D., Floyd, L., Khan, S., Namukasa, I.K., Somanath, S., Weber, J. & Yiu, C. (2017) A Pedagogical Framework for Computational Thinking. Digital Experiences in Mathematics Education vol. 3, pages 154–171(2017)     

ix https://www.theguardian.com/careers/2021/jun/28/why-arent-more-girls-in-the-uk-choosing-to-study-computing-and-technology  


Why Is Computer Science Important?

When starting out in Computer Science, you'll eventually wonder, "Why is Computer Science important?" Once you realize the importance of Comp Sci, I'm sure you'll be just as passionate about it as I am. Sure, it's a tough degree to complete. However, the benefits of studying Comp Sci definitely outweigh the struggles along the way. Not only will you have a great career, but you'll also be able to make a great impact on the world.

Computer Science is important for many reasons. However, the positive impact that computers have had on the world is the main reason that Computer Science is so important. Because of computers, we've been able to improve communication, transportation, healthcare, education, and food production; raise our overall standard of living; and contribute to many other areas of innovation.

Top 6 Reasons Why Computer Science Is So Important

1. Increased Standards of Living

We all have incredibly powerful computers in our smartphones. The same capabilities that we take for granted are improving millions of lives in developing countries. If you have a phone, you also have access to a world of information and web services. It opens up unlimited opportunities. However, smartphones are just one way that computer science has increased our standards of living.

Computers have also massively increased automation, which has improved our lives across the board. Not only do they help streamline the production of goods, but computers also help with the distribution of those goods and the management of the businesses behind them. And this isn't just good news for businesses. Not only do we have more choices as consumers, but this automation also reduces the cost of many goods.

The increased ease of production and the reduced prices of basic necessities help reduce poverty. Automation continues to increase today, and while it reduces the cost of products, it also requires a smaller labor force. That means fewer jobs and potentially higher unemployment. This is one of the drawbacks of automation and another problem that humanity will have to face in the near future.

2. Better Communication

There are about half a dozen apps on my phone right now that I can use to communicate with someone. In a matter of seconds I can call, text, Facetime, Tweet, Snapchat, message on Facebook, Gmail, Whatsapp, Blackboard, or Discord. I’m communicating with thousands of people every month through this blog and Comp Sci Central’s YouTube channel .

This level of worldwide connection gives power to the smallest minority: the individual. That’s another one of the reasons why computer science is important. Technology further enhances democracy and gives everyone a voice.

3. Better Transportation

What used to be planes, trains, and automobiles is quickly becoming self-driving vehicles, hyperloops, and starships. Without the continuous improvement of computers over the years, none of these would be possible. Technically, they're not quite here yet. However, they're well on their way and set to arrive this decade.

Even the latest transportation that we currently use relies heavily on computers and software. Computers control everything from the brakes (with ABS) to the transmission, the engine, air conditioning, alarm systems, windows… you get the point.

Although fully-autonomous vehicles aren't here yet, it shouldn't be too much longer. Tesla particularly has made tremendous strides in the self-driving arena. Plus, Elon Musk recently announced that they will reach "level-5" autonomy, which will no longer require any driver input. The only thing you'll have to do is tell the car where to go, and it will take you there.

4. Better Healthcare

Health records are stored on computers, making them organized and easier to access. Scheduling a visit to the doctor is just a few clicks away. The high level of organization in the healthcare industry also allows for the treatment of more patients. The scaling of patient treatment is important as we aim to make quality healthcare available globally.

Another reason computer science is so important in the healthcare industry is the range of technologies that improve patient survival rates. For instance, there are robots that assist doctors in surgery, called Computer Assisted Surgical Systems. They're minimally invasive, so they're safer for patients, thus improving success rates. Additionally, they make procedures easier for the surgeon to perform.

5. More Accessible Education

If you own a computer and have access to the internet, then you can learn nearly anything known to humanity. Virtually anyone anywhere can simply turn on their preferred device and study their preferred subject. In most cases, we can even do this for free. Additionally, the same technology grants us all access to various online universities.

6. Increased Food Production

Another reason why computer science is important is that it enables farmers all over the world to increase their food production. When you live on a planet with nearly 8 billion people, the food supply chain is vital. Not to mention experts believe the population will hit 9.7 billion by 2050.

Computers help farmers in quite a few ways. Farmers survey their land with satellites and apply water and nutrients autonomously. Computers also help increase greenhouse production drastically with less labor. Hopefully, similar systems of autonomous production will one day help us sustain life on Mars or other planets.

Computer Science does a lot of good for the world, and that's why it's so important. If computers continue to advance at the rate they have been, humanity will reach a new era. However, they will never solve all of our problems for us. In fact, automation and Artificial General Intelligence may pose real threats to humanity in the future. Still, I'm confident that humanity will prevail in the face of our obstacles.

What other choice do we have?

Tim Statler

Tim Statler is a Computer Science student at Governors State University and the creator of Comp Sci Central. He lives in Crete, IL with his wife, Stefanie, and their cats, Beyoncé and Monte. When he's not studying or writing for Comp Sci Central, he's probably just hanging out or making some delicious food.

Recent Posts

Programming Language Levels (Lowest to Highest)

When learning to code, one of the first things I was curious about was the difference in programming language levels. I recently did a deep dive into these different levels and put together this...

Is Python a High-Level Language?

Python is my favorite programming language so I wanted to know, "Is Python a High-Level Language?" I did a little bit of research to find out for myself and here is what I learned. Is Python a...


Kelin Luo

Research Topics

Theoretical computer science and operational research, including discrete optimization problems; online algorithms

Contact Information

306 Davis Hall

Buffalo, NY 14260-2500

Phone: (716) 645-1589

[email protected]


New Research Shows Learning Is More Effective When Active

By Aaron Aupperlee aaupperlee(through)cmu.edu


— Related Content —

  • New Intelligent Science Stations Change Maker Spaces
  • New AI Enables Teachers to Rapidly Develop Intelligent Tutoring Systems
  • Revolutionizing Education


Harnessing Generative Artificial Intelligence for Digital Literacy Innovation: A Comparative Study between Early Childhood Education and Computer Science Undergraduates


1. Introduction

2. Review of Recent Literature

3. Materials and Methods

3.1. The Present Study

  • RQ1—Do ECE undergraduates who utilize AI-generated platforms achieve higher academic performance in designing, developing, and implementing instructional design projects compared to their CS counterparts?
  • RQ2—Do ECE undergraduates who utilize AI-generated platforms have different user experiences (usefulness of AI tools, comfort level, challenges, and utilization) in their projects compared to their CS counterparts?
  • RQ3—Do ECE undergraduates exhibit higher levels of overall satisfaction using AI-powered instructional design projects compared to CS undergraduates?

3.2. Research Context

3.3. Participants

3.4. Instructional Design Context

  • A. Learning activities:
  • Research and compare features: Assign each participant one of the mentioned AI tools (Sudowrite, Jasper, ShortlyAI, Lumiere3D, Lumen5, Animaker AI). For this study's purpose, we gave participants time to experiment with one or two of the tools and encouraged them to create examples of how these tools could be used for educational purposes and to discuss their creations, focusing on the learning potential and possible challenges.
  • Identify curriculum topics: Brainstorm specific topics within ECE and CS that could benefit from AI-generated content, considering areas such as storytelling, coding basics, or creative expression.
  • Storyboard development: Divide participants into small groups, each assigned a chosen topic with a twofold purpose: (a) create a storyboard outlining how they would use AI tools to develop an engaging and educational learning experience on their chosen topic and (b) encourage them to consider factors like interactivity and assessments associated with learning objectives depending on their educational disciplines.
  • Presentation and peer feedback: Each group presents their storyboard, explaining their rationale and design choices to discuss the feasibility and effectiveness of each approach.
  • B. Learning projects:
  • Content creation: Participants can generate (video and image) presentations, and create artifacts designed to interact with learning subjects based on ECE and CS curricula using various AI platforms, which are described in the above subsection (see “Instructional design context”). These projects aim to explore how AI can improve video editing by automating tasks such as scene segmentation, color grading, and audio enhancement. This not only contributes to formal professional development by building new skills and knowledge, but also offers informal benefits by allowing participants to explore the potential of AI in this field.
  • Student motivation: The project area aligns with departmental interests, fostering collaboration and knowledge sharing beyond individual roles. This facilitates the creation of intra-departmental connections and the exchange of ideas.
  • Evaluating AI-generated content creation: This project area proposes investigating the current state of AI-powered content creation tools, including virtual avatars, video generation models, voices, and animations. This evaluation could assess the quality, effectiveness, and potential applications of these tools within educational settings, along with their potential impact on existing workflows.

3.5. Experimental Procedure

3.6. Ethical Considerations

3.7. Measuring Tools

  • Attractiveness: Measures the user’s overall impression of the product, whether they find it appealing or not.
  • Efficiency: Assesses how easy and quick it is to use the product and how well organized the interface is.
  • Perspicuity: Evaluates how easy it is to understand how to use the product and get comfortable with it.
  • Dependability: Focuses on users’ feelings of control during interaction, the product’s security, and whether it meets their expectations.
  • Stimulation: Assesses how interesting and enjoyable the product is to use and whether it motivates users to keep coming back.
  • Novelty: Evaluates how innovative and creative the product’s design is and how much it captures the user’s attention.
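For readers unfamiliar with instruments like the UEQ, the hypothetical Python sketch below shows the usual scoring arithmetic: 7-point answers are recoded to a -3..+3 range and averaged per scale. The item-to-scale assignment here is a placeholder, not the official UEQ key.

```python
# Score one respondent's questionnaire: recode 1..7 answers to -3..+3,
# then average the items belonging to each scale. All data are invented.
import statistics

answers = {"item1": 6, "item2": 7, "item3": 5, "item4": 6, "item5": 2, "item6": 3}

scales = {
    "Attractiveness": ["item1", "item2"],   # placeholder item groupings
    "Efficiency": ["item3", "item4"],
    "Novelty": ["item5", "item6"],
}

for scale, items in scales.items():
    recoded = [answers[i] - 4 for i in items]   # 1..7 -> -3..+3
    print(scale, round(statistics.mean(recoded), 2))
```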

3.8. Data Collection and Analysis

3.9. Data Integrity and Reliability

4.1. Analysis of Academic Performance

4.2. Analysis of Students' Experience

4.3. Analysis of Students' Satisfaction

5. Discussion

6. Conclusions

  • Incorporating AI integration projects: Educational institutions should consider integrating AI projects into digital literacy courses to equip students with valuable technical and pedagogical skills. This research confirms the effectiveness of integrating AI tools in digital literacy training. Students, even those with limited background in technology, can successfully learn to design, develop, and utilize AI-generated content.
  • Provide guidance and support: Offering clear guidance and support throughout the project, especially during the initial stages, can motivate and engage students with varying levels of technical expertise. This study highlights the importance of considering students’ educational backgrounds and prior technological experience. Design activities that cater to these differences, for example, offer more scaffolding or support for ECE students compared to CS undergraduates.
  • Consider user experience and satisfaction: The differences in user experience and satisfaction between ECE and CS students provide insights into the contextual factors that influence the adoption and effectiveness of AI tools in education. These findings support the theoretical perspective that user experience and satisfaction are critical factors in the successful implementation of educational technologies. Future research should further explore these contextual factors to develop more nuanced theories on technology adoption in education.
  • Differentiated learning approaches: Modified learning approaches may be necessary based on students’ backgrounds and interests. While this study’s findings suggest that ECE undergraduates in our sample benefited from video development projects aligned with their future careers, and CS students from our sample were more engaged with animation development tasks, these observations are based on small-scale cohorts from a single context. Therefore, further research with larger and more diverse samples is needed to validate these findings and to explore their applicability to broader cohorts of ECE and CS undergraduates.
  • Tailored educational approaches: The differences in user experience and satisfaction between ECE and CS students highlight the need for differentiated learning approaches based on students’ backgrounds and interests. For instance, ECE students may benefit more from projects involving video development, which aligns with their future careers, while CS students might be more engaged with tasks related to animation development. Tailoring educational approaches to the specific needs of different student groups can enhance the effectiveness of AI integration in education.
  • Reevaluated assumptions about AI experience: Our findings highlight the need to reassess assumptions about AI experience based on academic discipline. While we initially assumed that ECE students would have less AI experience, the opposite was true in our sample. This suggests that AI experience may be more closely related to the practical applications of AI in different fields rather than the level of technical knowledge.

7. Limitations and Considerations for Future Research

  • Future studies should implement larger and more diverse samples to enhance the generalizability of the findings. Including participants from different institutions and backgrounds can provide a more comprehensive understanding of the impact of AI tools in education.
  • Longitudinal studies are needed to examine the long-term effects of AI integration on students’ learning outcomes, user experience, and satisfaction. Such studies can provide deeper insights into the sustained impact of AI tools on education.
  • Incorporating qualitative research methods, such as interviews and focus groups, can complement the quantitative findings and provide richer insights into students’ experiences with AI tools. Qualitative data can help uncover the nuances and contextual factors that influence the effectiveness of AI in education.
  • External validation of the measurement instruments to confirm that they accurately measure learning outcomes is also crucial. Future research should employ external assessments, such as exams or practical projects, to validate the findings and ensure the robustness of the evaluation methods.

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Acknowledgments

Conflicts of Interest



Participant characteristics (ECE students, n = 32; CS students, n = 34):

| Measure | ECE M | ECE SD | CS M | CS SD | t-test |
| Age | 22.78 | 7.906 | 19.88 | 1.225 | 2.11 * |
| Experience with AI images | 0.56 | 0.504 | 0.21 | 0.410 | 3.16 ** |
| Experience with AI videos | 0.53 | 0.507 | 0.44 | 0.561 | 0.68 |
| Familiarity with generative AI | 3.88 | 0.421 | 3.15 | 1.048 | 3.75 ** |
| "AI is crucial for enhancing learning effectiveness" | 4.29 | 0.588 | 3.68 | 1.007 | 2.96 ** |

Academic performance (Levene's test for equality of variances and t-test):

| Measure | F | Sig. | t | df | Sig. (2-tailed) |
| Academic performance (equal variances assumed) | 0.540 | 0.465 | -0.218 | 64 | 0.828 |

User experience (Levene's test for equality of variances and t-test for equality of means):

| Measure | F | Sig. | t | df | Sig. (2-tailed) | Mean diff. | Std. error diff. |
| Usefulness (equal variances assumed) | 22.727 | 0.000 | 2.928 | 64 | 0.005 | 0.590993 | 0.201857 |
| Usefulness (equal variances not assumed) | - | - | 2.99 | 45.517 | 0.004 | 0.590993 | 0.197649 |
| Comfort level (equal variances assumed) | 0.732 | 0.396 | 0.247 | 64 | 0.806 | 0.039522 | 0.160223 |
| Comfort level (equal variances not assumed) | - | - | 0.248 | 63.459 | 0.805 | 0.039522 | 0.159469 |
| User (equal variances assumed) | 4.675 | 0.034 | 1.665 | 64 | 0.101 | 0.230392 | 0.138402 |
| User (equal variances not assumed) | - | - | 1.681 | 59.968 | 0.098 | 0.230392 | 0.137054 |

Satisfaction (Levene's test for equality of variances and t-test for equality of means):

| Measure | F | Sig. | t | df | Sig. (2-tailed) | Mean diff. | Std. error diff. |
| Satisfaction mean (equal variances assumed) | 0.674 | 0.415 | 1.189 | 64 | 0.239 | 0.1574 | 0.1323 |
| Satisfaction mean (equal variances not assumed) | - | - | 1.19 | 63.922 | 0.238 | 0.1574 | 0.1322 |
| Satisfaction images (equal variances assumed) | 1.565 | 0.216 | 1.206 | 64 | 0.232 | 0.1949 | 0.1615 |
| Satisfaction images (equal variances not assumed) | - | - | 1.209 | 63.995 | 0.231 | 0.1949 | 0.1612 |
| Satisfaction videos (equal variances assumed) | 0.02 | 0.889 | 1.226 | 64 | 0.225 | 0.1857 | 0.1514 |
| Satisfaction videos (equal variances not assumed) | - | - | 1.226 | 63.757 | 0.225 | 0.1857 | 0.1514 |
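The tables above pair Levene's test for equality of variances with independent-samples t-tests (Welch's version when variances differ). As a minimal sketch of that procedure, with made-up scores rather than the study's data, one might compute:

```python
# Levene's test decides which t-test variant to report, mirroring the
# "equal variances assumed / not assumed" rows above. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ece = rng.normal(4.0, 0.6, size=32)   # placeholder scores, n = 32
cs = rng.normal(3.6, 1.0, size=34)    # placeholder scores, n = 34

lev_stat, lev_p = stats.levene(ece, cs)
equal_var = lev_p > 0.05              # common decision rule
t_stat, t_p = stats.ttest_ind(ece, cs, equal_var=equal_var)
print(f"Levene F={lev_stat:.3f} p={lev_p:.3f}; "
      f"t={t_stat:.3f} p={t_p:.3f} (equal_var={equal_var})")
```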
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kazanidis, I.; Pellas, N. Harnessing Generative Artificial Intelligence for Digital Literacy Innovation: A Comparative Study between Early Childhood Education and Computer Science Undergraduates. AI 2024 , 5 , 1427-1445. https://doi.org/10.3390/ai5030068

Kazanidis I, Pellas N. Harnessing Generative Artificial Intelligence for Digital Literacy Innovation: A Comparative Study between Early Childhood Education and Computer Science Undergraduates. AI . 2024; 5(3):1427-1445. https://doi.org/10.3390/ai5030068

Kazanidis, Ioannis, and Nikolaos Pellas. 2024. "Harnessing Generative Artificial Intelligence for Digital Literacy Innovation: A Comparative Study between Early Childhood Education and Computer Science Undergraduates" AI 5, no. 3: 1427-1445. https://doi.org/10.3390/ai5030068



Computer Science > Digital Libraries

Title: Evaluating Research Quality with Large Language Models: An Analysis of ChatGPT's Effectiveness with Different Settings and Inputs

Abstract: Evaluating the quality of academic journal articles is a time consuming but critical task for national research evaluation exercises, appointments and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. This article assesses which ChatGPT inputs (full text without tables, figures and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. The results show that the optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66), and 4o-mini (0.66). The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into the human scale scores, which is 31% more accurate than guessing.
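As a rough sketch of the abstract's averaging-and-calibration step, with invented numbers rather than the paper's data, the procedure could look like:

```python
# Average repeated LLM scores per paper, then map them onto the human
# scale with least-squares linear regression. All values are invented.
import numpy as np

human = np.array([1, 2, 2, 3, 3, 4, 4, 4, 3, 2], dtype=float)

rng = np.random.default_rng(0)
# 30 noisy model scores per paper; averaging reduces their variance.
llm = np.mean(0.5 * human[:, None] + 1.0 + rng.normal(0, 0.4, (10, 30)), axis=1)

r = np.corrcoef(llm, human)[0, 1]              # Pearson correlation
slope, intercept = np.polyfit(llm, human, 1)   # calibration line
mapped = slope * llm + intercept               # model scores on the human scale
print(f"correlation r = {r:.2f}; first paper mapped to {mapped[0]:.2f}")
```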
Subjects: Digital Libraries (cs.DL); Artificial Intelligence (cs.AI)
Cite as: [cs.DL]
  (or [cs.DL] for this version)

Submission history

Access Paper:

  • Other Formats


RIKEN Center for Computational Science

The 267th R-CCS Cafe (Aug 23, 2024)

Details

Date: Fri, Aug 23, 2024
Time: 3:00 pm - 5:00 pm (3:00 pm - 4:00 pm: talks; 4:05 pm - 4:20 pm: discussions; from 4:20 pm: free discussion and coffee break)
City: Kobe, Japan / Online
Place: Lecture Hall (6th floor) at R-CCS; online seminar on Zoom
Language: Presentation language: English; presentation material: English

Speakers

Talk Titles and Abstracts

1st Speaker: Tingting Wang

Title: Exploring the Continuous Conformational Variability of Glutamate Dehydrogenase Using Cryo-EM Single-particle Images and MD Simulations

Abstract: Glutamate dehydrogenase (GDH) is involved in the metabolism of the amino acid glutamate and catalyzes the reversible conversion of glutamate to α-ketoglutarate. The GDH derived from Thermococcus profundus, a homohexamer, exhibits remarkable structural flexibility: a spontaneous motion of the NAD-binding domain relative to the core domain controls its open/closed states. This high flexibility permits different conformational states in the unliganded enzyme. In this study, its conformational variability was investigated through a novel integration of large-scale cryo-electron microscopy (cryo-EM) experiments and molecular dynamics simulations. Using cryo-EM single-particle images, a 3D-to-2D flexible fitting method, enhanced by iterative conformational-landscape refinement through MDSPACE, was employed to obtain an atomic-scale conformational model for each particle. With this pool of GDH conformations, the continuous conformational variability can be reconstructed from cryo-EM single-particle images without presupposing the number of discrete states, providing detailed insights into the domain motions of GDH and deepening our understanding of glutamate metabolism.

2nd Speaker: Han Xu

Title: Preparing Gutzwiller wave function for attractive SU(3) fermions on a quantum computer

Abstract: We implement the Gutzwiller wave function for attractive SU(3) fermion systems on a quantum computer using a quantum-classical hybrid scheme based on the discrete Hubbard-Stratonovich transformation. In this scheme, we express the nonunitary Gutzwiller operator as a linear combination of unitaries involving two-qubit fermionic Givens rotation gates. The fermionic Givens rotations are applied to the register qubits encoding different fermion colors on each site. Two complementary approaches are formulated to perform the sum over the auxiliary fields. The first approach probabilistically prepares the Gutzwiller wave function on the register qubits but requires measurement of ancilla qubits. Because of its probabilistic outcomes, the success rate is studied as a function of the variational parameter on different lattices for both the Fermi-sea trial state and the BCS-like trial state. The second approach utilizes the importance sampling technique to solve the Gutzwiller variational problem, where the expectation values of observables are the central objectives. The proposed scheme is examined by calculating the energies and triple occupancy of the attractive SU(3) Hubbard model within the context of digital quantum simulation. Moreover, experiments on a trapped-ion-based quantum computer are carried out for the two-site attractive SU(3) Hubbard model, where the raw data are in good agreement with the exact results within the statistical errors.

3rd Speaker: Yangyang Zhang

Title: Coarse-grained Molecular Modeling of Substrate Inhibition in Enzyme Catalysis

Abstract: Substrate inhibition occurs when enzyme activity decreases despite increasing substrate concentrations. By performing molecular dynamics (MD) simulations with a coarse-grained dynamic energy landscape model, this study revealed the mechanism of substrate inhibition of adenylate kinase (AdK) by AMP: excess AMP inhibits the enzymatic cycle by suppressing an energetically frustrated but kinetically accelerating pathway. Further investigations showed a tight interplay between enzyme conformational equilibria and substrate inhibition, with mutations favoring closed conformations enhancing inhibition. These findings suggest that this mechanism may apply to other multi-substrate enzymes where product release is the bottleneck step.

Important Notes

  • Please turn off your video and microphone when you join the meeting.
  • The broadcast may be interrupted or terminated depending on network conditions or other unexpected events.
  • The program schedule and contents may be modified without prior notice.
  • Depending on the device and network environment used, you may not be able to watch the session.
  • All rights concerning the broadcast material belong to the organizer and the presenters, and it is prohibited to copy, modify, or redistribute all or part of the broadcast material without the prior permission of RIKEN.

(Aug 14, 2024)



Li, X. (CSE) - Towards Scalable and Efficient Multimodal Learning

Friday, August 23, 2024 10:30am


About this Event

Foundation multimodal models have demonstrated remarkable capabilities, such as reasoning and human-level understanding, largely driven by ever-growing model sizes and data scales. However, the associated training costs increase exponentially. For example, replicating one of the pioneering multimodal models, CLIP, demands hundreds or even thousands of advanced GPUs, making it difficult for researchers to reproduce or analyze the underlying phenomena. To address this challenge, our research focuses on developing more efficient multimodal models and training algorithms, emphasizing architecture design and scalable training efficiency. Our findings include 1) revisiting the importance of image pre-training in video understanding by introducing a novel spatial-temporal separable convolution method, which leverages image priors more effectively; 2) discovering an inverse scaling law in CLIP training, where larger image/text encoders allow for training with fewer tokens, enabling the use of limited computational resources. We also plan to investigate the role of data quality and quantity in the multimodal domain, which we believe can lead to further efficiency improvements and innovative paradigms.

Event Host: Xianhang Li, Ph.D. Student, Computer Science & Engineering

Advisor: Cihang Xie


Dial-In Information

Zoom Link :  https://ucsc.zoom.us/j/99442815930?pwd=j4WuMLFBRbbLj4J14pM1xJ6X59zTvZ.1

Meeting ID: 994 4281 5930

Passcode:  329159


COMMENTS

  1. Research & Impact

    Stanford Computer Science faculty members work on the world's most pressing problems, in conjunction with other leaders across multiple fields. Fueled by academic and industry cross-collaborations, they form a network and culture of innovation.

  2. Research in Computer Science

    Semiha Ergan, an affiliate professor of the Computer Science and Engineering Department, is responsible for a project that performs data analysis on highly sensed buildings for understanding patterns in building performance. The data deals with HVAC systems and energy use in such buildings to assist in building management strategies.

  3. Research in computer science: an empirical study

    Our objective in this study is to provide a detailed characterization of computer science research, along the dimensions identified above, by examining articles published in major computer science journals from 1995-1999. Our interest in this study goes beyond topic and research methods and includes other ways of characterizing research such ...

  4. What is Research in Computing Science?

    Similarly, research in requirements engineering and human computer interaction has challenged the proponents of formal methods. These tensions stem from the fact that `Computing Science' is a misnoma. Topics that are currently considered part of the discipline of computing science are technology rather than theory driven.

  5. Research

    Research. The computing and information revolution is transforming society. Cornell Computer Science is a leader in this transformation, producing cutting-edge research in many important areas. The excellence of Cornell faculty and students, and their drive to discover and collaborate, ensure our leadership will continue to grow.

  6. Computer science

    Computer science articles from across Nature Portfolio. Computer science is the study and development of the protocols required for automated processing and manipulation of data. This includes ...

  7. Research in computer science: an empirical study

    Our objective in this study is to provide a detailed characterization of computer science research, along the dimensions identified above, by examining articles published in major computer science journals from 1995-1999. Our interest in this study goes beyond topic and research methods and includes other ways of characterizing research such ...

  8. Writing for Computer Science

    Reviews "This is a comprehensive guide on research methods and how to produce a scientific publication detailing one's research in computer science … . a must-read for those doing research in CS and related fields. It will greatly benefit anyone who is involved in any kind of scientific research, as the examples are only from the CS field.

  9. How to do research in computer science

    Criteria for funding research in Canada: the most important source of funding for university-based research in computer science is the Natural Sciences and Engineering Research Council (NSERC)

  8. Introduction to Research in Computer Science

    Topics include defining a CS research problem, finding and reading technical papers, oral communication, technical writing, and independent learning. Course participants work in teams as they apprentice with a CS research group to propose an original research problem and write a research proposal. (UCSB Computer Science)

  9. The Use of Information Technology in Research

    Computers and telecommunications have revolutionized the processes of scientific research. How...

  10. (PDF) Research Methods in Computer Science

    Researchers in the field of computer science and engineering may view the research process in the way depicted by Figure 1. There is an experimenter in the middle of the research field trying to ...

  11. New and Future Computer Science and Technology Trends

    Computer science is constantly evolving. Learn more about the latest trends in AI, cybersecurity, regenerative agritech, and other developing areas of the field.

  12. Application of Computer in Research

    There are various computer applications used in scientific research; among the most important are data storage, data analysis, scientific simulations, instrumentation control, and knowledge sharing.

  13. Why is Computer Science So Important?

    Computer science is the practice of solving complex organizational problems with technical solutions. The field is so important because computers and technology have been integrated into virtually every economic sector, industry, and organization operating in the modern economy. Professionals working in computer science ...

  14. Research in Computer Science Education

    Computer science education research addresses students' difficulties, misconceptions, and cognitive abilities; activities that can be integrated into the learning process; the use of visualization and animation tools; and computer science teachers' roles, difficulties, and professional development, among many other topics.

  15. 500+ Computer Science Research Topics

    In this post, we will delve into some of the most interesting and important research topics in computer science, from the latest advancements ...

  16. The Importance of Computing Education Research

    Steve Cooper, Jeff Forbes, Armando Fox, Susanne Hambrusch, Andrew Ko, and Beth Simon (Version 2, January 14, 2016). Interest in computer science is growing. As a result, computer science (CS) and related departments are experiencing an explosive increase in undergraduate ...

  17. Areas of Research

    The Department of Computer Science and Engineering of Wright State University recently received a grant, titled "REU Site: Cybersecurity Research at Wright State University", from the National Science Foundation.

  18. Why students need Computer Science to succeed

    Computer science skills are critical to succeed in today's economy, but too many students, especially those from diverse backgrounds and experiences, are excluded from computer science. That's why we've created a new resource guide which we hope will help teachers build inclusive computer science education programs.

  19. Importance and application of operational research in computer ...

    Operational research in computer science is a systematic approach to analyzing a problem and finding optimized, effective solutions.

  20. Why Is Computer Science Important?

    Computer science is important for many reasons, but chief among them is the positive impact computers have had on the world. Because of computers, we have improved communication, transportation, healthcare, education, and food production, raised our overall standard of living, and contributed to many ...

  23. "That's important, but...": How Computer Science Researchers Anticipate

    ABSTRACT Computer science research has led to many breakthrough innova-tions but has also been scrutinized for enabling technology that has negative, unintended consequences for society. Given the increas-ing discussions of ethics in the news and among researchers, we interviewed 20 researchers in various CS sub-disciplines to identify whether and how they consider potential unintended ...

  22. Luo, Kelin

    Research topics: theoretical computer science and operational research, including discrete optimization problems and online algorithms ... Computer science theory assesses which problems are possible and feasible to solve through theories of ...

  23. New Research Shows Learning Is More Effective When Active

    Engaging students through interactive activities, discussions, feedback, and AI-enhanced technologies resulted in improved academic performance compared to traditional lectures, lessons, or readings, faculty from Carnegie Mellon University's Human-Computer Interaction Institute concluded after collecting research into active learning. The research also found that effective active learning ...

  24. AI

    Conversely, Computer Science students reported a slightly higher comfort level with these tools. In terms of overall satisfaction, Early Childhood Education students expressed greater satisfaction with AI software than their counterparts, acknowledging its importance for their future careers. ...

  25. Evaluating Research Quality with Large Language Models: An Analysis of ...

    Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotions. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title ...

  26. The 267th R-CCS Cafe (Aug 23, 2024)

    The second approach uses importance sampling to solve the Gutzwiller variational problem, where the expectation values of observables are the central objectives. The proposed scheme is examined by calculating the energies and triple occupancy of the attractive SU(3) Hubbard model within the context of the digital quantum ...
