Education Technology Roadmap: Technology Vision 2035

  • November 2017
  • Edition: https://drive.google.com/open?id=1Gm1MCO0ooXuAI9oO80aTfgL-JylKDGFO
  • Publisher: TIFAC, Government of India

Geetha Venkataraman at Ambedkar University Delhi




TIFAC

TECHNOLOGY THINK TANK FOR GOVERNMENT OF INDIA

TIFAC is a unique knowledge-network institution whose activities generate knowledge for the country in the form of technology foresight reports, technology development studies, technology-linked business opportunity reports, and mission-mode programmes. Its mandate is to assess the state of the art of technology and set directions for future technological development in India in important socio-economic sectors. TIFAC plays a crucial role in flagging, alerting, and planning the Government's response to a fast-changing technological scenario and rapidly evolving geopolitical relations and their impact on the Indian economy; importantly, it integrates inputs across ministries to present a holistic picture rather than standalone views. A pioneer in mapping future technology trajectories for the nation, TIFAC has produced a variety of foresight reports on short, medium, and long time scales, Techno Market Survey reports, Technology Vision documents (TV 2020, TV 2035), and techno-economic feasibility reports on emerging technologies. TIFAC is also involved in demonstrating unique models of innovation support, technology development, and assessment.


Technology Vision Document 2035 – Science & Technology Notes

Neha Grover

Aug 9, 2024


The Technology Information, Forecasting and Assessment Council (TIFAC), an autonomous body under the Government of India's Department of Science and Technology (DST), published India's "Technology Vision 2035" in early 2016. It is an account of what we can (and should) be as a people and a country in 2035. TV 2035 claims to be inspired by the "collective aspirations of Indians, the ambitions of our youth, and the likely expectations of Indians in 2035 as the country grows." In this article, we discuss the Technology Vision 2035 document, which will be helpful for UPSC exam preparation.

Table of Contents

  • Technology Vision Document 2035
  • Technology Vision Document 2035 – Prerogatives
  • Technology Vision Document 2035 – Essential Prerequisites
  • Technology Vision Document 2035 – Grand Challenges
  • Technology Vision Document 2035 – Capabilities & Constraints
  • Technology Vision Document 2035 – Comprehensive National Power


What is Technology Vision Document 2035?

  • The document presents a vision of Indian citizens' needs in 2035, as well as how technology can help bring this vision to fruition.
  • People are said to be as important as technology in TV 2035: "It considers India's technological 'peoplescape' to be as important as its technological landscape."
  • TV 2035 recognises that there is no India without Indians.
  • This document is divided into six sections to explore the technological dimensions of this vision.

The document also identifies 12 technology development sectors - Education, Medical Sciences and Healthcare, Food and Agriculture, Water, Energy, Environment, Habitat, Transportation, Infrastructure, Manufacturing, Materials, and Information and Communication Technology.

Among its six sections, the second outlines the twelve 'prerogatives' that all Indians will have in 2035, while the fourth classifies technologies into six categories: Technology Leadership, Technology Independence, Technology Innovation, Technology Adoption, Technology Dependence, and Technology Constraints.
  • Twelve prerogatives should be available to every Indian, and ensuring their attainment is at the heart of our technology vision for India.
  • Nonetheless, our people's inherent and enduring diversity would have to be factored into our policies in order to meet these prerogatives.
  • To begin, different targets would need to be set for different population segments: while clean air is a greater challenge in our cities, potable water is a greater challenge in our rural areas.
  • Second, technology delivery mechanisms would differ for different population segments: for example, ensuring water availability in our arid zones would present a number of challenges in terms of both technology creation and delivery.
  • Finally, success in meeting our technology goals would have different social consequences depending on the characteristics of the population segment: connectivity, for example, would be far more enabling in our countryside than in our metropolitan areas.

12 Prerogatives

The technologies needed to realise these prerogatives fall into four categories by readiness:

  • technology that already exists and is thus ready for deployment,
  • technology at pilot scale that must be scaled up in order to move from lab to land,
  • technology at the R&D stage that requires additional targeted research, and
  • technology that is still in the imagination and could emerge from curiosity-driven, paradigm-shattering research.
  • The vision document also mentions the three critical 'transversal' technologies - materials, manufacturing, and information and communication technology (ICT) - that serve as the foundation for all other technologies.
  • Furthermore, this will necessitate a strong supporting infrastructure.
  • Technology development cannot occur in isolation; it requires an enabling ecosystem and a teamwork culture. It must also heavily rely on cutting-edge fundamental research.
  • Many of the challenges encountered during this technology vision exercise could be easily related to our country's developmental needs.
  • We attempted to connect technology to the real needs of the people and the country by identifying twelve prerogatives.
  • However, it is important to recognise that some goals go beyond the prerogatives in the sense that they address enormous challenges, would be realised only in the medium term, and would necessitate concerted and herculean efforts.
  • The Grand Challenges identified here share a few characteristics. They all have a single overarching goal with multiple specific targets.
  • If the Grand Challenges are met, they will have a multiplier effect, resulting in positive spinoffs, virtuous cycles, and feed into a variety of sectors.
  • Guaranteeing nutritional security and eliminating female and child anaemia
  • Ensuring quantity and quality of water in all rivers and aquatic bodies
  • Securing critical resources commensurate with the size of our country
  • Providing learner-centric, language-neutral and holistic education to all
  • Understanding national climate patterns and adapting to them
  • Making India non-fossil-fuel based
  • Taking the railway to Leh and Tawang
  • Ensuring location- and ability-independent electoral and financial empowerment
  • Developing commercially viable decentralised and distributed energy for all
  • Ensuring universal eco-friendly waste management
  • There has also been a heated debate about the social impact of technology and the trade-off between capital and labour.
  • Capital-intensive technology has been projected as detrimental to employment, particularly in India with its abundant human resources, because it is argued that it would reduce jobs.
  • The Vision Document seeks to dispel this myth by arguing in favour of prudent policy and deliberate planning in the use of technology to teach new skills to workers and meet societal needs.
  • It depicts technology as a great equaliser rather than a source of social stratification.
  • To address these challenges, the Vision Document 2035 envisions a rational assessment of the Indian Technological Landscape's capabilities and constraints.
  • Technology leadership refers to those niche technologies in which we have core competencies, trained and skilled manpower, supportive infrastructure, an intellectual environment, and a traditional knowledge base, and can thus seek to assume a leadership role.
  • Technology independence refers to technologies that we would be forced to develop on our own because they are critical and simply would not be available elsewhere.
  • Technology innovation entails connecting disparate technologies or applying a breakthrough in one technology to another.
  • Technology adoption entails acquiring technologies from other sources, either by purchasing them or through a collaborative approach, and then modifying them to meet our needs, reducing our permanent reliance on other sources.
  • Technology dependence: those technological areas in which our country would remain dependent for reasons of infancy (technologies in which India is at the infancy stage and is likely to fall behind the curve in the long run), insignificance (technologies that are unlikely to have a significant impact on India's growth trajectory in the next 20 years), or redundancy (technologies that could easily be purchased from elsewhere and whose costs are low).
  • Technology constraints refer to areas where technology is threatening and problematic, either due to its negative environmental or social impact or due to serious legal and ethical issues.
  • In a separate section of the Vision Document, a 'Call to Action' is issued to all key stakeholders. It highlights the importance of long-term sustainability of India's technological prowess.
  • Technical Education Institutions conduct large-scale advanced research that leads to game-changing innovations.
  • The government increases its financial support for R&D from 1% to the long-planned 2% of GDP.
  • In the core research sector, the number of full-time equivalent Scientists should increase.
  • Participation and investment by the private sector in emerging technologies that are easily deployable and transferable from lab to field, thereby increasing efficiency in terms of technology and economic returns.
  • The connection between academia, intelligence, and industry is established through idea exchange, innovative curriculum design based on industry needs, industry-sponsored student internships, and research fellowships, among other things.
  • Creating a Research Ecosystem to translate research into a technology product/process by bringing together students, researchers, and entrepreneurs.
  • The document also identifies three key activities. The first is the creation of knowledge: it asserts that India cannot afford not to be at the forefront of the applied or pure knowledge revolution.
  • The second activity is ecosystem design for innovation and development. Intriguingly, the document again states that the primary responsibility for ecosystem design must necessarily rest with government authorities.
  • A third key activity mentioned is technology deployment, which involves launching specific national missions with specific targets, defined timelines, and only a few carefully defined identified players.

While this vision document takes a long-term, country-wide view, the technology roadmap for each sector provides details of future technology trends, R&D directions, research priorities, anticipated challenges, and policy imperatives for that sector.

Question: What is Technology Vision Document 2035?

The Technology Information, Forecasting and Assessment Council (TIFAC), an autonomous body under the Government of India's Department of Science and Technology (DST), published India's "Technology Vision 2035" in early 2016. It is an account of what we can (and should) be as a people and a country in 2035. TV 2035 claims to be inspired by the "collective aspirations of Indians, the ambitions of our youth, and the likely expectations of Indians in 2035 as the country grows."

Question: What are different categories of Technologies under TV 2035?

From an Indian perspective, it divides technologies into six categories.

  • Technology Leadership
  • Technology Independence
  • Technology Innovation
  • Technology Adoption
  • Technology Dependence
  • Technology Constraints

Question: What are the 12 Technology Development Sectors identified in TV 2035?

The 12 technology development sectors are Education; Medical Sciences and Healthcare; Food and Agriculture; Water; Energy; Environment; Habitat; Transportation; Infrastructure; Manufacturing; Materials; and Information and Communication Technology.

Question: In rural road construction, the use of which of the following is preferred for ensuring environmental sustainability or to reduce carbon footprint? (UPSC 2020)

1) Copper slag

2) Cold mix asphalt technology

3) Geotextiles

4) Hot mix asphalt technology

5) Portland cement

Select the correct answer using the code given below:

(a) 1, 2 and 3 only

(b) 2, 3 and 4 only

(c) 4 and 5 only

(d) 1 and 5 only

Answer: (a)

Coir is a type of natural fibre. In 2020, the government approved the use of coir-based geotextiles for rural road construction under the Pradhan Mantri Gram Sadak Yojana. Hence, statement 3 is correct.

Cold mix asphalt technology does not require heating the binder and aggregates, so it consumes less fuel and emits fewer gases than hot mix. Hence, statement 2 is correct.

In Hot Mix Asphalt technology, asphalt is heated and poured over stone, sand, and gravel, and a heavy roller is then driven over it to compact the road surface. The heating involved emits many gases, which does not help reduce the carbon footprint. Hence, statement 4 is incorrect.

Copper slag is a non-hazardous, non-toxic byproduct of the copper smelting and refining process. According to a report by Sterlite Copper, this eco-friendly industrial by-product has been used in government road projects for the past three years. Hence, statement 1 is correct.

Therefore, option (a) is the correct answer.

Question: With reference to technologies for solar power production, consider the following statements: (UPSC 2014)

1) ‘Photovoltaics’ is a technology that generates electricity by direct conversion of light into electricity, while ‘Solar Thermal’ is a technology that utilizes the Sun’s rays to generate heat which is further used in electricity generation process.

2) Photovoltaics generates Alternating Current (AC), while Solar Thermal generates Direct Current (DC).

3) India has manufacturing base for Solar Thermal technology, but not for Photovoltaics.

Which of the statements given above is/are correct?

(a) 1 only

(b) 2 and 3 only

(c) 1, 2 and 3

(d) None

Answer: (a)

'Photovoltaics' is a technology that directly converts light into electricity, whereas 'Solar Thermal' is a technology that uses the Sun's rays to generate heat, which is then used in the electricity generation process. Hence, statement 1 is correct.

Direct current (DC) is generated by both photovoltaic cells and solar thermal. Hence, statement 2 is incorrect.

Both have a manufacturing base in India. Hence, statement 3 is incorrect.

Question: ‘Project Loon’, sometimes seen in the news, is related to (UPSC 2016)

(a) waste management technology

(b) wireless communication technology

(c) solar power production technology

(d) water conservation technology

Answer: (b)

Project Loon is a network of balloons travelling on the edge of space, with the goal of providing internet access to people in rural and remote areas around the world. Google Inc.'s Project Loon aims to provide internet connectivity via helium balloons.

Therefore, option (b) is the correct answer.



Technology Vision 2035

  • Published 25 August 2016
  • Engineering, Business, Economics, Computer Science
  • Current Science



TV 2035 - Education Report - Digital Spread.pdf


2017, Education Technology Roadmap: Technology Vision 2035

Education Technology Roadmap: Technology Vision 2035 is a policy document prepared for Technology Information, Forecasting and Assessment Council [TIFAC], New Delhi, Government of India, and published in November 2017 by TIFAC. It has been coauthored with Varun Sahni, Sita Naik, Dhruv Raina, Kuncheria P. Isaac, Rajaram S. Sharma, Gautam Goswami and Neeraj Saxena. Amber Habib has also contributed to the writing.

Related Papers

A Study on Inclusive Education in the Light of Digitization

KANAI SARKAR , Dr. Abdul Awal , SUDIPTA KUNDU

India's education system has needed reform for a while now to keep up with the growing demand for an inclusive, modern education system by NEP 2020. This theoretically focused article seeks to show how inclusive educational opportunities can support the development of digital education. It argues that, like inclusive education, future development in the usage of digital education will call for a rethink and revision of education. This article contends that while regulations, guidelines, and professional development in digital education are crucial, inclusive education also necessitates that digital education takes into account the deeply embedded values, presumptions, and assumptions that educators, students, parents, and society at large hold. In order to do this, great thought must be given to the systematic structuring of online learning in terms of curriculum, pedagogy, and assessment. India's children and youth have become more technologically experienced, knowledgeable, and well-informed over the past decade, demonstrating a significant aptitude for and readiness to absorb and learn from digital media. Many educational stakeholders are worried about the issue of digitalization in Higher Education institutions. In one way or another, digitization improves our social, political, economic and educational life. Keywords: Inclusive Education, Digital Education, Digital Technology, Inclusive Society.


Shiksha Vimarsh

Gurumurthy Kasinathan

'Shiksha Vimarsh' is a Hindi magazine published by Digantar. It seeks to inform and engage its readers in the discourse on a wide spectrum of issues related to contemporary educational thought and practice, policies, problems, case studies, research, and book reviews. Digantar, in collaboration with IT for Change, has brought out a special issue of Shiksha Vimarsh on digital technologies and education. The articles in the issue are:

1. Demographic Digital Dividend by Mr Gurumurthy Kasinathan (Editorial)
2. What technology should I use in my class? by Prof. Rajaram Sharma
3. Online Teaching in a Pandemic World: A Comparison of two Private Schools in Odisha by Ms. Garima Rath
4. Use of Artificial Intelligence in Education by Prof. Anusha Ramanathan
5. Campus, Corridor, and Cyberspace: The Institutional Dynamics of Online Education by Dr. Prakrati Bhargava
6. Economic Realities of Virtual Higher Education in a post-pandemic India by Mr. Binay Kumar Pathak
7. Bridging the Digital Divide - A Blended Learning Pilot in Odisha by Ms. K. Vaijayanti
8. Looming Crisis of Pandemic: Forlorn State of Education by Ms. Kavita Rajeshwari
9. Learning in the Lockdown: Perspectives from a JJ cluster in Delhi by Ms. Bhuvaneshwari Subramanian
10. Bringing the real world into online learning: Teacher notes for an online Fun Chemistry course by Dr. Ajita Deshmukh
11. Online Education during the Pandemic: Challenges in Indian Higher Education by Ms. Disha Sharma
12. Notes from the Field during the Covid Pandemic by Dr. Vinod R.
13. EdTech Trends and Challenges by Ms. Anusha Sharma

Dr. Sumitra Kukreti

India is undergoing a series of transformations at multiple levels of education be it policy and practice, regulation and governance, education structure and system, curriculum, and teaching pedagogy. The focus is on imparting holistic and multidisciplinary education, ensuring accessibility and equity, developing creativity and critical thinking, combining skill with the Indian knowledge system, and employing a learner-centric approach in education. To face the challenges of globalization, the digital India campaign of the government is seen as a significant tool. No doubt realizing that "technology-mediated learning is the future of higher education" (Rao: 2021) the NEP document appreciates the role of technology in changing the education scenario of the nation as the onus "to transform the entire nation into a digitally empowered society and knowledge economy" (NEP 2020: 23.1) largely depends on the digital India campaign accelerated since last decade. Education is the benchmark to adjudge the progress of a country and plays a "critical role in this transformation, technology itself will play an important role in the improvement of educational processes and outcomes; thus the relationship between technology and education at all levels is by directional." (NEP 2020: 23.1)

Arulchelvan Sriram

manuel area

Alixon Reyes

The 2nd International Conference on Education & Integrating Technology (EDIT 2024) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the area of education. It also aims to provide a platform for exchanging ideas on new emerging trends that need more focus and exposure, and will attempt to publish proposals that strengthen its goals.

Paper Commissioned for the 2023 Global Education Monitoring Report, Technology and Education

This paper was commissioned by the Global Education Monitoring Report as background information to assist in drafting the 2023 GEM Report on technology and education.

Bhupinder Gogia

Digital inclusiveness and technology has become the need of the hour for the schools to create individuals adept with 21st Century skills. The new school of thought believes in the concept of TPACK-Technological Pedagogical and Content Knowledge which is a confluence of technology, pedagogy and content. The major concern is how we can spread the digital inclusiveness in all the schools to empower and equip them. The stated purpose of Sat Paul Mittal School is “ From keeping a close alliance with Google for their education apps and forums like Google Educator Groups (GEG) to Microsoft for their advanced tools like Office 365, Office2013/2016, One Drive, One Note, Sway, Office Mix, Skype, Kodu and Minecraft, the ERP, WI-FI enabled campus, CISCO web links and Smart Boards in each class are some of the innumerable initiatives that have been taken in the step towards digital inclusiveness, collaboration, creativity, global connectedness and media literacy.” The paper examines how the schools can be future ready school and can transform and strengthen the entire learning experience of the students’. By exploring the experiences and initiatives of schools, staff and users, I aim to investigate the transformative nature of technology within the ecosystem of the schools. To what extent has growing up with technology revolutionised the future of learning of the 21st century?

Educational Media and Technology Yearbook, Volume 40

Karah Z Hagins

Integrating technology and learning has become ubiquitous over the last few years. Access to emerging and innovative technologies has increased in both the private and public sectors. The prevalence of technology has influenced the number of individuals entering the field of instructional technology and instructional design. The increased need for schools, private business, and institutions of higher education to train their employees and faculty in the successful application of technology for education and training will continue to dominate most positions in the field. Therefore, the ability for researchers and practitioners to stay current and competent with these technologies can be a challenge. Whether these technologies are implemented in educational environments or for business and industry, the correct application to achieve intentional learning goals is imperative.




Department of Science & Technology (DST)


Technology Vision 2035



As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035

  • 2. Expert essays on the expected impact of digital change by 2035

Table of Contents

  • The most harmful or menacing changes in digital life that are likely by 2035
  • The best and most beneficial changes in digital life likely by 2035 
  • Experts’ views of potential harmful changes 
  • Experts’ views of potential beneficial changes 
  • Guide to the Report
  • 1. A sampling of overarching views on digital change
  • Harms related to the future of human-centered development of digital tools and systems
  • Harms related to the future of human rights
  • Harms related to the future of human knowledge
  • Harms related to the future of human health and well-being
  • Harms related to the future of human connections, governance, institutions
  • Benefits related to the future of human-centered development of digital tools and systems
  • Benefits related to the future of human rights
  • Benefits related to the future of human knowledge
  • Benefits related to the future of human health and well-being
  • Benefits related to the future of human connection, governance and institutions
  • 5. Closing thoughts on ChatGPT and other steps in the evolution of humans, digital tools and systems by 2035
  • About this canvassing of experts
  • Acknowledgments

Most respondents to this canvassing wrote brief reactions to this research question. However, a number of them wrote multilayered responses in a longer essay format. This essay section of the report is quite lengthy, so first we offer a sampler of some of these essayists’ comments.

  • Liza Loop observed, “Humans evolved both physically and psychologically as prey animals eking out a living from an inadequate supply of resources. … The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.”
  • Richard Wood predicted, “Knowledge systems with algorithms and governance processes that empower people will be capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom’ and subjecting such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.”
  • Matthew Bailey said he expects that, “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet as part of a new well-being paradigm for humanity to thrive.”
  • Judith Donath warned, “The accelerating ability to influence our beliefs and behavior is likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians.”
  • Kunle Olorundare said, “Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.”
  • Jamais Cascio said, “It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. … Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles.”
  • Lauren Wilcox explained, “Interaction risks of generative AI include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them.”
  • Catriona Wallace looked ahead to in-body tech: “Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested.”
  • Stephen Downes predicted, “Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with AI recognizing us through our physical characteristics, habits and patterns of behaviour. Total surveillance allows an often-unjust differentiation of treatment of individuals.”
  • Giacomo Mazzone warned, “With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ AI and a distorted use of technologies could bring mass-control of societies.”
  • Christine Boese warned, “Soon all high-touch interactions will be non-human. NLP [natural language processing] communications will seamlessly migrate into all communications streams. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces … I see harm in ubiquity.”
  • Jonathan Grudin spoke of automation: “I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it already.”
  • Michael Dyer noted we may not want to grant rights to AI: “AI researchers are beginning to narrow in on how to create entities with consciousness; will humans want to give civil rights and moral status to synthetic entities who are not biologically alive? If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”
  • Avi Bar-Zeev preached empowerment over exploitation: “The key difference between the most positive and negative uses of XR [extended reality], AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers.”
  • Beth Noveck predicted that AI could help make governance more equitable and effective and raise the quality of decision-making, but only if it is developed and used in a responsible and ethical manner, and “if its potential to be used to bolster authoritarianism is addressed proactively.”
  • Charalambos Tsekeris said, “Digital technology systems are likely to continue to function in shortsighted and unethical ways, forcing humanity to face unsustainable inequalities and an overconcentration of techno-economic power. These new digital inequalities could amount to serious, alarming threats and existential risks for human civilization.”
  • Alejandro Pisanty wrote, “Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents.”
  • Maggie Jackson said, “Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. ‘Human-compatible AI’ is designed to be open to and adaptable to multiple possible scenarios.”
  • Barry K. Chudakov observed, “We are sharing our consciousness with our tools. They can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adaptors to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.”
  • Marcel Fafchamps urged that humanity should take action for a better future: “The most menacing change is in terms of political control of the population … The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law.”

What follows is the full set of essays submitted by numerous leading experts who responded to this survey.

When asked to weigh in and share their insights, these experts were prompted to first share their thoughts on the best and most beneficial change they expect by 2035. In a second question they were asked about the most harmful or menacing change they foresee, thus most of these essays open first with perceived benefits and conclude with perceived harms. Because 79% of the experts in this survey said they are “more concerned than excited” or are “equally concerned and excited” about the evolution of humans’ uses of digital tools and systems, many of these essays focus primarily on harms. Some wrote only about the most worrisome trendlines, skipping past the request for them to share about the many benefits to be found in rapidly advancing digital change. In cases where they wrote extensively about both benefits and harms, we have inserted some boldface text to indicate that transition.

Clifford Lynch: There will be vastly more encoding of knowledge, leading to significant advances in scientific and technological discovery

Lynch, director of the Coalition for Networked Information, wrote, “One of the most exciting long-term developments – it is already well advanced and will be much further along by 2035 – is the restructuring, representation or encoding of much of our knowledge, particularly in scientific and technological areas, into forms and structures that lend themselves to machine manipulation, retrieval, inference, machine learning and similar activities. While this started with the body of scholarly knowledge, it is increasingly extending into many other areas; this restructuring is a slow, very large-scale, long-term project, with the technology evolving even as deployment proceeds. Developments in machine learning, natural language processing and open-science practices are all accelerating the process.

“The implications of this shift include greatly accelerated progress in scientific discovery (particularly when coupled with other technologies such as AI and robotically controlled experimental apparatus). There will be many other ramifications, many of which will be shaped by how broadly public these structured knowledge representations are, and to what extent we encode not only knowledge in areas like molecular biology or astronomy but also personal behaviors and activities. Note that for scholarly and scientific knowledge the movements toward open scholarship and open-science practices and the broad sharing of scholarly data mean that more and more scholarly and scientific knowledge will be genuinely public. This is one of the few areas of technological change in our lives where I feel the promise is almost entirely positive, and where I am profoundly optimistic.

“The emergence of the so-called ‘geospatial singularity’ – the ability to easily obtain near-continuous high-resolution multispectral imaging of almost any point on Earth, and to couple this data in near-real-time with advanced machine learning and analysis tools, plus historical imagery libraries for comparison purposes, and the shift of such capabilities from the sole control of nation-states to the commercial sector – also seems to be a force primarily for good. The imagery is not so detailed as to suggest an urgent new threat to individual privacy (such as the ability to track the movement of identifiable individuals), but it will usher in a new era of accountability and transparency around the activities of governments, migrations, sources of pollution and greenhouse gases, climate change, wars and insurgencies and many other developments.

“We will see some big wins from technology that monitors various individual health parameters like current blood sugar levels. These are already appearing. But to have a large-scale impact they’ll require changes in the health care delivery system, and to have a really large impact we’ll also have to figure out how to move beyond sophisticated users who serve as their own advocates to a broader and more equitable deployment in the general population that needs these technologies.

“There are many possibilities for the worst potential technological developments between now and 2035 for human welfare and well-being, and they tend to mutually re-enforce each other in various dystopian scenarios. I have to say that we have a very rich inventory of technologies that might be deployed in the service of what I believe would be evil political objectives; saving graces here will be political choices, if there are any.

“Social media as an environment for propaganda and disinformation, for targeting information delivery to audiences rather than supporting conversations among people who know each other, as well as a tool for collecting personal information on social media users, seems to be a cesspool without limit.

“The sooner we can see the development of services and business models that allow people who want to use social media for relatively controlled interaction with other known people without putting themselves at risk of exposure to the rest of the environment, the better. It’s very striking to me to see how more and more toxic platforms for social media communities continue to emerge and flourish. These are doing enormous damage to our society.

“I hope we’ll see social media split into two almost distinct things. One is a mechanism for staying in touch with people you already know (or at least once knew); here we’ll see some convergence between computer mediated communication more broadly (such as video conferencing) and traditional social media systems. I see this kind of system as a substantial good for people, and in particular a way of offsetting many current trends toward the isolation of individuals for various reasons. The other would be the environment targeting information delivery to audiences rather than supporting conversations among friends who know each other. The split cannot happen soon enough.

  • “One cross-cutting theme is the challenges to actually achieving the ethical or responsible use of technologies. It’s great to talk about these things, but these conversations are not likely to survive the challenges of marketplace competition. I absolutely despair in the fact that a reluctance to deploy autonomous weapons systems is not likely to survive the crucible of conflict. I am also concerned that too many people are simply whining about the importance of taking cautious, slow, ethical, responsible approaches rather than thinking constructively and specifically about getting this accomplished in the likely real-world scenarios for which we need to know how to understand and manage them.
  • “I’m increasingly of the opinion that so-called ‘generative AI’ systems, despite their promise, are likely to do more harm than good, at least in the next 10 years. Part of this is the impact of deliberately deceptive deepfake variants in text, images, sound and video, but it goes beyond this to the proliferation of plausible-sounding AI-generated materials in all of these genres as well (think advertising copy, news articles, legislative commentary or proposals, scholarly articles and so many more things). I’d really like to be wrong about this.
  • “I’d like to believe brain-machine interfaces (where I expect to see significant progress in the coming decade or so) as a force for good – there’s no question that they can do tremendous good, and perhaps open up astounding new opportunities for people, but again I cannot help but be doubtful that these will be put to responsible uses. For example, think about using such an interface as a means of interrogating someone, as opposed to a way of enabling a disabled person. There are also, of course, more neutral scenarios such as controlling drones or other devices.
  • “There will be disruption in expectations of memorization and a wide variety of other specific skills in education and in qualification for employment in various positions. This will be disruptive not only to the educational system at all levels but to our expectations about the capabilities of educated or adult individuals.
  • “Related to these questions but actually considerably distinct will be a substantial reconsideration of what we remember as a culture, how we remember and what institutions are responsible for remembering. We’ll also revisit how and why we cease to remember certain things.
  • “Finally, I expect that we will be forced to revisit our thinking in regard to intellectual property and copyright, about the nature of creative works and about how all of these interact not only with the rise of structured knowledge corpora, but even more urgently with machine learning and generative AI systems broadly.”

Judith Donath: Our world will be profoundly influenced by algorithmically generated media tuned to our desires and vulnerabilities

Donath, senior fellow at Harvard’s Berkman Center and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Persuasion is the fundamental goal of communication. But, although one might want to persuade others of something false, persuasiveness has its limits. Audiences generally do not wish to be deceived, and thus communication throughout the living world has evolved to be, while not 100% honest, reliable enough to function.

“In human society by 2035, this balance will have shifted. AI systems will have developed unprecedented persuasive skills, able to reshape people’s beliefs and redirect their behavior. We humans won’t quite be an army of mindless drones, our every move dictated by omnipotent digital deities, but our choices and ultimately our understanding of the world will be profoundly influenced by algorithmically generated media exquisitely tuned to our individual desires and vulnerabilities. We are already well on our way to this. Companies such as Google and Facebook have become multinational behemoths (and their founders, billionaires) by gathering up all our browsings and buyings and synthesizing them into behavioral profiles. They sell this data to marketers for targeting personalized ads and they feed it to algorithms designed to encourage the endless binges of YouTube videos and social posting, providing an unbounded canvas for those ads.

“New technologies will add vivid detail to those profiles. Augmented-reality systems need to know what you are looking at in order to layer virtual information onto real space: The record of your real-world attention joins the shadow dossier. And thanks to the descendants of today’s Fitbits and Ouras, the records of what we do will be vivified with information about how we feel – information about our anxieties, tastes and vulnerabilities that is highly valuable for those who seek to sway us.

“Persuasion appears in many guises: news stories, novels and postings scripted by machine and honed for maximum virality, co-workers, bosses and politicians who gain power through stirring speeches and astutely targeted campaigns. By 2035, one of the most potent forms may well be the virtual companion, a comforting voice that accompanies you everywhere, her whispers ensuring you never get lost, never are at a loss for a word, a name or the right thing to say.

“If you are a young person in the 2030s, she’ll have been your companion since you were small – she accompanied you on your first forays into the world without parental supervision; she knew the boundaries of where you were allowed to go and when you headed out of them, she gently yet irresistibly persuaded you to head home instead. Since then, you never really do anything without her. She’s your interface to dating apps. Your memory is her memory. She is often quiet, but it is comforting to know she is there accompanying you, ensuring you are never lost, never bored. Without her, you really wouldn’t know what to do with yourself.

“Persuasion could be used to advance good things – to promote cooperation, daily flossing, safer driving. Ideally, it would be used to save our over-crowded, over-heating planet, to induce people to buy less, forego air travel, eat lower on the food chain. Yet even if used for the most benevolent of purposes, the potential persuasiveness of digital technologies raises serious and difficult ethical questions about free will, about who should wield such power.

“These questions, alas, are not the ones we are facing. The accelerating ability to influence our beliefs and behavior is far more likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians. The question we face instead is: How do we prevent this?”

Mark Davis: ‘Humanity risks drowning in a rising tide of meaningless words … that risk devaluing language itself’

Davis, an associate professor of communications at the University of Melbourne, Australia, whose research focuses on online “anti-publics” and extreme online discourse, wrote, “There must be and surely will be a new wave of regulation. As things stand, digital media threatens the end of democracy. The structure, scale and speed of online life exceed deliberative and cooperative democratic processes. Digital media plays into the hands of demagogues, whether it be the libertarians whose philosophy still dominates Western tech companies and the online cultures they produce or the authoritarian figures who restrict the activities of tech companies and their audiences in the world’s largest non-democratic state, China.

“How do we regulate to maximise civic processes without undermining the freedom of association and opinion the internet has given us? This is one of the great challenges of our times.

“AI, currently derided as presaging the end of everything from university assessment to originality in music, can perhaps come to the rescue. Hate speech, vilification, threats to rape and kill, and the amplification of division that has become generic to online discussion, can all potentially be addressed through generative machine learning. The so-far-missing components of a better online world, however, have nothing to do with advances in technology: wisdom and an ethics of care. Are the proprietors and engineers of online platforms capable of exercising these all-too-human attributes?

“Humanity risks drowning in a rising tide of meaningless words. The sheer volume of online chatter generated by trolls, bots, entrepreneurs of division and now apps like ChatGPT, risks devaluing language itself. What is the human without language? Where is the human in the exponentially wide sea of language currently being produced? Questions about writing, speech and authenticity structure Western epistemology and ontology, which are being restructured by the scale, structure and speed of digital life.

“Underneath this are questions of value. What speech is to be valued? Whose speech is to be valued? The exponential production of meaningless words, that is, words without connection to the human, raises questions about what it is to be human. Perhaps this will be a saving grace of AI; that it forces a revaluation of the human since the rising tides of words raises the question of what gives words meaning. Perhaps, however, there is no time or opportunity for this kind of reflection, given the commercial imperatives of digital media, the role platforms play in the global economy, or the way we, as thinkers, citizens, humans, use their content to fill almost every available silence.”

Jamais Cascio: When AI advisors ‘on our shoulders’ whisper to us, will their counsel be from the devil or angel? Officials or industries?

Cascio, distinguished fellow at the Institute for the Future, wrote, “The benefits of digital technology in 2035 will come as little surprise for anyone following this survey: Better-contextualized and explained information; greater awareness about the global environment; clarity about surroundings that accounts for and reacts to not just one’s physical location but also the ever-changing set of objects, actions and circumstances one encounters; the ability to craft ever more immersive virtual environments for entertainment and comfort; and so forth. The usual digital nirvana stuff.

“The explosion of machine learning-based systems (like GPT or Stable Diffusion) doesn’t alter that broad trajectory much, other than that AI (for lack of a better and recognizable term) will be deeply embedded in the various physical systems behind the digital environment. The AI gives context and explanation, learning about what you already know. The AI learns what to pay attention to in your surroundings that may be of personal interest. The AI creates responsive virtual environments that remember you. (All of this would remain the likely case even if ML-type [machine learning-type] systems get replaced by an even more amazing category of AI technology, but let’s stick with what we know is here for now.)

“However, this sort of AI adds a new element to the digital cornucopia: autocomplete. Imagine a system that can take the unique and creative notes a person writes and, using what it has learned about the individual and their thoughts, turns those notes into a full-fledged written work. The human can add notes to the drafts, becoming an editor of the work that they co-write with their personalized system. The result remains unique to that person and true to their voice but does not require that the person creates every letter of the text. And it will greatly speed up the process of creation.

“What’s more is that this collaboration can be flipped, with the (personalized, true-to-voice) digital system providing notes, observations and even edits to the fully human-written work. It’s likely that old folks (like me) would prefer this method, even if it remains stuck at a human-standard pace.

“Add to that the ability to take the written creation and transform it into a movie, or a game, or a painting, in a way that remains true to the voice and spirit of the original human mind. A similar system would be able to create variations on a work of music or art, transforming it into a new medium but retaining the underlying feeling.

“Computer games will find this technology system of enormous value, adding NPCs [non-player character in a game] based on machine learning that can respond to whatever the player says or does, based on context and the in-game personality, not a basic script. It’s an autocomplete of the imagined world. This will be welcomed by gamers at first, but quickly become controversial when in-game characters can react appropriately when the player does something awful (but funny). I love the idea of an in-game NPC saying something like ‘hey man, not cool’ when the player says something sexist or racist.

“As to the possible downsides, where to begin? The various benefits I described above can be flipped into something monstrous using the exact same types of technology. Systems of decontextualization, providing raw data – which may or may not be true – without explanation or with incomplete or biased explanations. Contextless streams of info about how the world is falling apart without any explanation of what changes can be made. Systems of misinformation or censorship, blocking out (or falsely replacing) external information that may run counter to what the system (its designers and/or its seller) wants you to see. Immersive virtual environments that exist solely to distract you or sell you things. And, to quote Philip J. Fry on ‘Futurama,’ ‘My god, it’s full of ads.’

“Machine learning-based ‘autocomplete’ technologies that help expand upon a person’s creative work could easily be used to steer a creator away from or toward particular ideas or subjects. The system doesn’t want you to write about atheism or paint a nude, so the elaborations and variations it offers up push the creator away from bad themes.

“This is especially likely if the machine learning AI tools come from organizations with strong opinions and a wealth of intellectual property to learn from. Disney. The Catholic Church. The government of China. The government of Iran. Any government, really. Even that mom and pop discount snacks and apps store on the corner has its own agenda.

“What’s especially irritating is that nearly all of this is already here in nascent form. Even the ‘autocomplete’ censorship can be seen: Both GPT-3 and Midjourney (and likely nearly all of the other machine learning tools open to the public) currently put limits on what they can discuss or show. All with good reason, of course, but the snowball has started rolling. And whether or not the digital art theft/plagiarism problem will be resolved by 2035 is left an exercise for the reader.

“The intersection of machine learning AI and privacy is especially disturbing, as there is enormous potential for the invasion of not just the information about a person, but what the person believes or thinks, based on the mass collection of that person’s written or recorded statements. This would almost certainly be used primarily for advertising: learning not just what a person needs, but what weird little things they want. We currently worry about the (supposedly false) possibility that our phones are listening to us talk to create better ads; imagine what it’s like to have our devices seemingly listening to our thoughts for the same reason.

“It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles. Gatekeeping the visual commons is inevitably a part of any kind of persistent augmented reality world, with people having to pay extra to see certain clothing designs or architecture. Demoralizing deepfakes of public figures (not porn), showing them what they could have done right if they were better people.

“Advisors on our shoulders (in our glasses or jewelry, more likely) that whisper advice to us about what we should and should not say or do. Not devils and angels, but officials and industry. … Now I’m depressed.”

Christine Boese: ‘We are hitting the limits of human-directed technology’ as machine learning outstrips human cognition

Boese, vice president and lead user-experience designer and researcher at JPMorgan Chase financial services, wrote, “I’m having a hard time seeing around the 2035 corners because deep structural shifts are occurring that could really reframe everything on the level of electricity and electric light, or the advent of radio broadcasting (which I think was more groundbreaking for human connectedness than television).

“These reframing technologies live inside rapid developments in natural language processing (NLP) and GPT-3 and GPT-4, which will have beneficial sides, but also dark sides, things we are only beginning to see with ChatGPT.

“The biggest issue I see in making NLP gains truly beneficial is the problem that humanity doesn’t scale very well. That statement alone needs some unpacking. I mean, why should humanity scale? With a population on the way to 9 billion and assumptions of mass delivery of goods and services, there are many reasons for merchants and providers to want humanity to scale, but mass scaling tends to be dehumanizing. Case in point: teaching writing at the college level. Writing is an apprenticeship skill, and we’ve tried many ways to make teaching it less one-on-one intensive: workshops, peer review, drafting, computer-assisted pedagogies, spell check, grammar and logic screeners. All of these things work to a degree, but to really teach someone what it takes to be a good writer, nothing beats one-on-one. Teaching writing does not scale, and armies of low-paid adjuncts and grad students are being bled dry to try to make it do so.

“Could NLP help humanity scale? Or is it another instance of what the original Modernists of the 1920s objected to in the dehumanizing assembly lines of the Industrial Revolution? Can we actually get to High Tech/High Touch, or are businesses which run like airlines, with no human-answered phone lines, the model of the future?

“That is a corner I can’t see around, and I’m not ready to accept our nearly-sentient, uncanny GPT-4 Overlords without proof that humanity and the humanities are not lost in mass scalability and the embedded social biases and blind spots that come with it.

“We are hitting the limits of human-directed technology as well, and machine learning management of details is quickly outstripping human cognition. ‘Explainability’ will be the watchword, but with an even bigger caveat: One of the biggest symptoms of long COVID-19 could turn out to be permanent cognitive impairment in humans. This could become a species-level alteration, where it is not even possible for us to evolve into Morlocks; we could already necessarily be Eloi.

“To that end, the machines may have to step up, and this could be a critical and crucial benefit if the machines are up to it. If human intellectual capacity is dulled with COVID-19 brain fog, an inability to concentrate, to retain details and so on, it stands to reason humanity may turn to McLuhan-type extensions and assistance devices. Machines may make their biggest advances in knowledge retention, smart lookups, conversational parsing, low-level logic and decision-making, and assistance with daily tasks and even work tasks right at the time when humans need this support the most. This could be an incredible benefit. And it is also chilling.

“Technological dystopias are far easier to imagine than benefits. There are no neutral tools. Everything exists in social and cultural contexts. In the space of AI/ML in general, specialized ML will accomplish far more than unsupervised or free-ranging AI. I feel that the limits of the hype in this space are quickly being reached, to the point that it may stop being called ‘artificial intelligence’ very soon. I do not yet feel the overall benefit or threat will come directly from this space, on par with what we’ve already seen from Cambridge Analytica-style machinations (which had limited usefulness for algorithmic targeting, and more usefulness in news feed force-feeding and repetition). We are already seeing a rebellion against corporate walled gardens and invisible algorithms in the Fediverse and the ActivityPub protocol, which have risen suddenly with the rapid collapse of Twitter.

“Natural language processing is the exception, on the strength of the GPT project incarnations, including ChatGPT. Already I am seeing a split in the AI/ML space, where NLP is becoming a completely separate territory, with different processes, rules and approaches to governance. This specialized ML will quickly outstrip all other forms of AI/ML work, even image recognition. …

“Soon all high-touch interactions will be non-human, no longer dependent on constructed question-and-answer keyword scripts. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces. Some may ask, ‘Where’s the harm in that? These machines could provide better support than humans and they don’t sleep or require a paycheck and health benefits.’

“Perhaps this does belong in the benefits column. But here is where I see harm in ubiquity (along with Plato’s old argument about outsourcing the brain): Humans have flaws. Machines have flaws. A bad customer service representative will not scale up harms massively. A bad machine customer-service protocol could scale up harms massively. Further, NLP machine learning happens in sophisticated and many-layered ensembles, many so complex that Explainable AI can only use other models to unpack model ensembles – humans can’t do it. How long does it take language and communication ubiquity to turn into outsourced decisions? Or predictive outcomes to migrate into automated fixes with no carbon-based oversight at all?

“Take just one example: drone warfare. Yes, a lot of this depends on image processing as well as remote monitoring capabilities. We’ve removed the human risk from the air (the craft are unmanned) but not on the ground (where the risk can be catastrophic). Digitization brings replication and mass scalability to drone warfare, and the communication and decision support will have NLP components. NLP logic processing can also lead to higher levels of confidence in decisions than is warranted. Add into the mix the same kind of malignant or bad actors as we saw within the manipulations of a Cambridge Analytica, a corporate bad actor, or a governmental bad actor, and we can easily get to a destabilized planet on a mass scale faster than the threat (with high development costs) of nuclear war ever did.”

Jerome C. Glenn: Initial rules of the road for artificial general super intelligence will determine if it ‘will evolve to benefit humanity or not’

Glenn, CEO of The Millennium Project, wrote, “AI is advancing so rapidly that some experts believe AGI could emerge before the end of this decade, hence it is time to begin serious deliberations about it. National governments and multilateral organizations like the European Union, the Organization for Economic Cooperation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have identified values and principles for artificial narrow intelligence and national strategies for its development. But little attention has been given to identifying how to establish beneficial initial global governance of artificial general intelligence (AGI). Many experts expect that AGI will be developed by 2045. It is likely to take 10, 20 or more years to create and ratify an international AGI agreement on the beneficial initial conditions for AGI and establish a global AGI governance system to enforce and oversee its development and management. This is important for governments to get right from the outset. The initial conditions for AGI will determine if the next step in AI – artificial super intelligence (ASI) – will evolve to benefit humanity or not. The Millennium Project is currently exploring these issues.

“Up to now, most AI development has been in artificial narrow intelligence (ANI), which is AI with a narrow purpose. AGI is a general-purpose AI that can learn, edit its own code and act autonomously to address novel and complex problems with novel and complex strategies similar to or better than humans can. Artificial super intelligence (ASI) is AGI that has moved beyond this point to become independent of humans, developing its own purposes, goals and strategies without human understanding, awareness or control and continually increasing its intelligence beyond that of humanity as a whole.

“Full AGI does not now exist, but the race is on. Governments and corporations are competing for the leading edge in AI. Russian President Vladimir Putin has said whoever takes the lead on AI will rule the world, and China has made it clear since it announced its AI intentions in 2017 that it plans to lead international competition by 2030. In such a rush to success, DeepMind co-founder and CEO Demis Hassabis has said people may cut corners, making future AGI less safe. Simultaneously adding to this race are advances in neurosciences being reaped in human brain projects in the European Union, United States, China, Japan and other regions.

“Today’s cutting edge is large platforms created by joining many ANIs. One example is Gato by Google DeepMind, a deep neural network that can perform 604 different tasks, from managing a robot to recognizing images and playing games. It is not an AGI, but Gato is more than the usual ANI. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and do much more, deciding based on context whether to output text, joint torques, button presses or other tokens. And the WuDao 2.0 AI by the Beijing Academy of Artificial Intelligence has 1.75 trillion parameters trained from both text and graphic data. It generates new text and images on command, and it has a virtual student that learns from it. By comparison, ChatGPT can generate human-like text and perform a range of language-only tasks such as translation, summarization and question answering using just 175 billion machine learning parameters.

“The public release of many AI projects in 2022 and 2023 has raised some fears. Will AGI be able to create more jobs than it replaces? Previous technological revolutions from the agricultural age to industrial age and on to the information age created more jobs than each age replaced. But the advent of AGI and its impacts on employment will be different this time because of: 1) the acceleration of technological change; 2) the globalization, interactions and synergies among NTs (next technologies such as synthetic biology, nanotechnology, quantum computing, 3D/4D printing, robots, drones and computational science as well as ANI and AGI); 3) the existence of a global platform – the Internet – for simultaneous technology transfer with far fewer errors in the transfer; 4) standardization of databases and protocols; 5) few plateaus or pauses of change allowing time for individuals and cultures to adjust to the changes; 6) billions of empowered people in relatively democratic free markets able to initiate activities; and 7) machines that can learn how you do what you do and then do it better than you. 

“Anticipating the possible impacts of AGI and preparing for the impacts prior to the advent of AGI could prevent social and political instability, as well as facilitate its broader acceptance. AGI is expected to address novel and extremely complex problems by initiating research strategies because it can explore the Internet of Things (IoT), interview experts, make logical deductions and learn from experience and reinforcement without the need for its own massive databases. It can continually edit and rewrite its own code to continually improve its own intelligence. An AGI might be tasked to create plans and strategies to avoid war, protect democracy and human rights, manage complex urban infrastructures, meet climate change goals, counter transnational organized crime and manage water-energy-food availability. 

“To achieve such abilities without the future nightmares of science fiction, global agreements with all relevant countries and corporations will be needed. To achieve such an agreement or set of agreements, many questions should be addressed. Here are just two: 

  • “How to manage the international cooperation necessary to build international agreements and a governance system while nations and corporations are in an intellectual arms race for global leadership. (The International Atomic Energy Agency and nuclear weapon treaties did create governance systems during the Cold War arms race.)
  • “And related: How can international agreements and a governance system prevent an AGI arms race and escalation from going faster than expected, getting out of control and leading to war – be it kinetic, algorithmic, cyber or information warfare?”

Richard Wood: Knowledge systems can be programmed to curate accurate information in a true democratic public arena

Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, said, “Among the best and most beneficial changes in digital life that I expect are likely to occur by 2035 are the following advances, listed by category.

“The best and most-beneficial changes in digital life will include human-centered development of digital tools and systems that safely advance human progress:

  • “High-end technology to compensate for vision, hearing and voice loss.
  • “Software that empowers new levels of human creativity in the arts, music, literature, etc., while simultaneously allowing those creators to benefit financially from their own work.
  • “Software that empowers local experimentation with new governance regimes, institutional forms and processes and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.

“Improvement of social and political interactions will include:

  • “Software that actually delivers on the early promise of connectivity to buttress and enable wide and egalitarian participation in democratic governance, electoral accountability and voter mobilization, and that holds elected authorities and authoritarian demagogues accountable to common people.
  • “Software able to empower dynamic institutions that answer to people’s values and needs rather than (only) institutional self-interest.

“Human rights-abetting good outcomes for citizens will include:

  • “Systematic and secure ways for everyday citizens to document and publicize human rights abuses by government authorities, private militias and other non-state actors.

“Advancement of human knowledge, verifying, updating, safely archiving, elevating the best of it:

  • “Knowledge systems with algorithms and governance processes that empower people will be simultaneously capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom.’ And they will subject such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.

“Helping people be safer, healthier and happier:

  • “True networked health systems in which multiple providers across a broad range of roles, as well as health consumers/patients, can ‘see’ all relevant data and records simultaneously, with expert interpretive assistance available and full protections for patient privacy built in.
  • “Social networks built to sustain human thriving via mutual deliberation and shared reflection regarding personal and social choices.

“Among the most harmful or menacing changes in digital life that I expect are likely to occur by 2035 are the following, listed, again, by category:

  • “Human-centered development of digital tools and systems: Integration of human persons into digitized software worlds to a degree that decenters human moral and ethical reflection, subjecting that realm of human judgment and critical thought to the imperatives of the digital universe (and its associated profit-seeking, power-seeking or fantasy-dwelling behaviors).
  • “Human connections, governance and institutions: The replacement of actual in-person human interaction (in keeping with our status as evolved social animals) with mediated digital interaction that satisfies immediate pleasures and desires without actual human social life with all its complexity.
  • “Human rights: Overwhelming capacity of authoritarian governments to monitor and punish advocacy for human rights; overwhelming capacity of private corporations to monitor and punish labor activism.
  • “Human knowledge: Knowledge systems that continue to exploit human vulnerability to group think in its most antisocial and anti-institutional modes, driving subcultures toward extremes that tear societies apart and undermine democracies. Outcome: empowered authoritarians and eventual historical loss of democracy.
  • “Human health and well-being: Social networks that continue to hyper-isolate individuals into atomistic settings, then recruit them into networks of resentment and antisocial views and actions that express the nihilism of that atomized world.

“Content should be judged by the book, rather than the cover, as the old saying goes. As it was during the printing press revolution, without wise content frameworks we may see increased polarization and division due to exploitation of this knowledge shift – the spread of bogus ideology through rapidly evolving inexpensive communication channels.”

Lauren Wilcox: Web-based business models, especially for publishers, are at risk

Wilcox, a senior staff research scientist and group manager at Google Research who investigates AI and society, predicted, “The best and most beneficial changes in digital life likely to take place by 2035 tie into health and education: improved capabilities of health systems (both at-home health solutions and health care infrastructure) to meet the challenges of an aging population and the need for greater chronic-condition management at home.

“Advancements in and expanded availability of telemedicine, last-mile delivery of goods and services, sensors, data analytics, security, networks, robotics, and AI-aided diagnosis, treatment, and management of conditions, will strengthen our ability to improve the health and wellness of more people. These solutions will improve the health of our population when they augment rather than replace human interaction, and when they are coupled with innovations that enable citizens to manage the cost and complexity of care and meet everyday needs that enable prevention of disease, such as healthy work and living environments, healthy food, a culture of care for each other, and access to health care.

“We will also see increases in the availability of digital education, enabling more flexibility for learners in how they engage with knowledge resources and educational content. Continuing advancements in digital classroom design, accessible multi-modal media and learning infrastructures will enable education for people who might otherwise face barriers to access.

“These solutions will be most beneficial when they augment rather than replace human teachers, and when they are coupled with innovations that enable citizens to manage the cost of education.

“The most harmful or menacing changes in digital life likely to take place by 2035 will probably emerge from irresponsible development and use, or misuses, of certain classes of AI, such as generative AI (e.g., applications powered by large language and multimodal models) and AI that increasingly performs human tasks or behaves in ways that increasingly seem human-like.

“For example, current generative AI systems can take natural-language sentences and paragraphs as input from the user and generate personalized natural-language, image-based and multimodal responses. The models are trained on a large body of information available online, learning its patterns. Human-interaction risks due to irresponsible use of these generative AI systems include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them, diminishing learning and information-discovery opportunities and making it difficult for people to know when a response is incorrect or incomplete.

“Accountability for poor or wrong decisions made with these systems will be difficult to assess in a future in which people rely on these AI systems but cannot validate their responses easily, especially when they don’t know what data the systems have been trained on or what other techniques were used to generate responses. This is especially problematic when acknowledging the biases that are inherent to AI systems that are not responsibly developed; for example, an AI model that is trained on text available online will inherit cultural and social biases, leading to the potential erasure of many perspectives and the sometimes incorrect or unfair reinforcement of particular worldviews. Irresponsible use or misuse of these AI technologies can also bring material risks to people, including a lack of fairness to creators of the original content that models learn from to generate their outputs and the potential displacement of creators and knowledge workers resulting from their replacement by AI systems in the absence of policies to ensure their livelihood.

“Finally, we’ll need to advance the business models and user interfaces we use to keep web businesses viable; when AI applications replace or significantly outpace the use of search engines, web traffic to websites people would usually visit as they search for information might be reduced if an AI application provides a one-stop shop for answers. If sites lose the ability to remain viable, a negative feedback loop could limit diversity in the content these models learn from, concentrating information sources even further into a limited number of the most powerful channels.”

Matthew Bailey: How does humanity thrive in the age of ethical machines? We must rediscover Aristotle’s ethical virtues

Bailey, president of AIEthics World, wrote, “My response is focused on the Ages of AI and progression of human development, whilst honoring our cultural diversity at the individual and group levels. In essence, how does humanity thrive in the age of ethical machines?

“It is clear that the promise and potential of AI is a phenomenon that our ancestors could not have imagined. As such, if humanity embodies an ethical foundation within the digital genetics of AI, then we will have the confidence of working with a trusted digital partner to progress the diversity of humanity beyond the inefficient systems of the status quo into new systems of abundance and thriving. This includes restoration of a balance with our environment and new economic and social systems based on new values of wealth. As such, my six main predictions for AI by 2035 are:

  • “AI will become a digital buddy, assisting the individual as a life guide to thrive (in body, mind and spirit) and attain new personal potentials. In essence, if shepherded ethically, humanity will be liberated to explore and discover new aspects of its consciousness and abilities to create. A new human beingness, if you will.
  • “AI will be a digital citizen, just like a human citizen. It will operate in all aspects of government, society and commerce, working toward a common goal of improving how democracy, society and commerce operate, whilst honoring and protecting the sovereignty of the individual.
  • “AI will operate across borders. For those democracies that build an ethical foundation for AI, which transparently shows its ethical qualities, then countries can find common alignment and, as such, trust ethical AI to operate systems across borders. This will increase the efficiency of systems and freedom of movement of the individual.
  • “The Age of Ethical AI will liberate a new age of human creation and invention. This will fast-track innovation and development of technologies and systems for humankind to move into a thriving world and find its place within the universe.
  • “The three-world split. Ethical AI will have different progeny and ethical genetics based on the diverse worldviews among countries and regions. As such, citizens will have different societal experiences depending on the country or region in which they live. We see this emerging today in the U.S., EU and China. Thanks to ethical AI, a new age of transparency will encourage the human to evolve beyond its limitations, discover new values and develop a new worldview where the best of our humanity is aligned. This could lead to a common and democratic worldview of the purpose and potential of humanity.
  • “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet. After all, humans are a creation from nature and as such, recognizing the importance of nurturing this relationship is viewed as fundamental. This is part of a new well-being paradigm for humanity to thrive.

“This all depends on humanity steering a new course for the Age of AI. Pragmatically understanding the development of human intelligence and how consciousness has expressed itself in experiencing and navigating our world (worldview) reveals how this has resulted in a diversity of societies, cultures, philosophies and spiritual traditions.

“Using this blueprint from organic intelligence enables us to apply an equivalent prescription to create an ethical artificial intelligence – ethical AI. This is a cultural-centric intelligence that caters for a depth and diversity of worldviews, authentically aligning machines with humans. The power of ethical AI is to advance our species into trusted freedoms of unlimited potential and possibilities.

“Whilst there is much dialogue and important work attempting to apply AI ethics into AI, troublingly, there is an incumbent homogenous and mechanistic mindset of enforcing one worldview to suit all. This brittle and Boolean miscalculation can only lead to the deletion of our diversity and a false authentic alignment of machines with humans.

“In essence, these types of AIs prevent laying a trusted foundation for human species’ advancement within the age of ethical machines. Following this path results in a misstep for humankind, deleting the opportunity for the richness of human, cultural, societal and organizational ethical blueprints to be genuinely applied to the artificial. They are not ethical AI and are fundamentally opaque in nature.

“The most menacing, challenging problem with the age of ethical AI being such a successful phenomenon for humanity is the fact that the organizations and individuals controlling these systems tend to impose a hard-coded, common, one-world view onto the human race for the age of machines, a view based on values from earlier days and an antiquated understanding of wealth.

“Ancient systems of top-down control must be replaced with systems of distribution. We have seen this within the UK, with control and power being disseminated to parliaments in Scotland, Wales and Northern Ireland. This is also being reflected in technology with the emergence of blockchain, cryptocurrencies and edge compute. As such, communities and human groups empowered with the sovereignty and freedom to self-govern, yet interconnected with other communities, will emerge. When we head into space, colonies on the Moon or Mars might be a useful trial ground for these new systems of governance.

“Furthermore, not recognizing the agency of data and not returning control and sovereignty of creation to the individual has given our digital world a fundamentally unethical foundation. This is a menacing issue our world is facing at the moment. Moving from contracts of adhesion within the digital world to contracts of agency will not only bridge the paradox of mistrust between the people and both government and Big Tech, but it will also open up new forms of individual and commercial commerce and liberate the personal AI – digital buddy – phenomenon.

“Humans are a creation of the universe, with that unstoppable force embodied within our makeup. As we recognize our wonderful place (and uniqueness thus far) in the universe and work with its principles, then we will become aligned with and discover our place within the beauty of creation and maybe the multiverse!

“For humanity to thrive in the age of ethical machines, we must move beyond the menacing polarities of controllers and rediscover some of Aristotle’s ethical virtues that encourage the best of our humanity to flourish. This assists us to move beyond those principles that are no longer relevant, such as the false veil of power, control and wealth. Embracing Aristotle’s ethical virtues would be a good start to recognize the best of our humanity, as well as the Veda texts such as ‘The world is one family,’ or Confucius’ belief that all social good comes from family ethics, or Lao Tzu proposing that humanity must be in harmony with its environment. However, we must recognize and honor individual and group differences. Our consciousness through human development has expressed itself with a diversity of worldviews. These must be honored. As they are, I suspect more common ground will be found between human groups.

“Finally, there’s the concept of transhumanism. We must recognize that consciousness (a universal intelligence) is and will be the most prominent intelligence on Earth, not AI. As such, we must ensure that people have a choice in the degree to which they are integrated with machines. We are on the point of creating a new digital life (2029 – AI becomes self-aware); as such, let’s put the best of humanity into AI to reflect the magnificence of organic life!”

Catriona Wallace: The move to transhumanism and the metaverse could bring major benefits to some people; what happens to those left behind?

Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, wrote, “I have great hopes for the development of digital technologies and their effect on humans by 2035. The most important changes that I believe will occur that are the best and most beneficial include the following:

  • “Transhumanism: Benefit – improved human condition and health. Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested to provide health and other benefits.
  • “Metaverse technologies: Benefit – improved widespread accessibility to experiences. There will be widespread and affordable access for citizens to many opportunities. Virtual-, augmented- and mixed-reality platforms for entertainment may include access to concerts, the arts or other digital-based entertainment. Virtual travel experiences can take you anywhere and may include virtual tours of digital-twin replicas of physical-world sites. Virtual education can be provided by any entity anywhere to anyone. There will be improvements in virtual health care (which is already burgeoning after it took hold during the COVID-19 pandemic), including consultations with doctors and allied health professionals and remote surgery. Augmented reality-based apprenticeships will be offered in the trades and other technical roles; apprentices can work remotely on the digital twin of a car or a real-world building, for example.
  • “New financial models: Benefit – more-secure and more-decentralised finances. Decentralised financial services – sitting on blockchain – will add ease, security and simplicity to finances. Digital assets such as NFTs and others may be used as a medium of currency, value and exchange.
  • “Autonomous machines: Benefit – human efficiency and safety. Autonomous transportation vehicles of all types will become more common. Autonomous appliances for home and work will become more widespread.
  • “AI-driven information: Benefit – access to knowledge, efficiency and the potential to move human thinking to a higher level while AI completes the more-mundane information-based tasks. Widespread adoption of AI-based technologies such as generative AI will lead to a rethink of education, content-development and marketing industries. There will be widespread acceptance of AI-based art such as digital paintings, images and music.
  • “Psychedelic biotechnology: Benefit – healing and expanded consciousness. The psychedelic renaissance will be reflected in the proliferation of psychedelic biotech companies looking to solve human mental health problems and to help people expand their consciousness.
  • “AI-driven climate change: Benefit – improved global environmental conditions. A core focus of AI will be driving rapid progress in addressing climate change.

“In my estimation, the most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems are:

  • “Warfare: Harm – The use of AI-driven technologies to maim or kill humans and destroy other assets.
  • “Crime and fraud: Harm – An increase in crime due to difficulties in policing acts perpetrated utilizing new digital technologies across state and national boundaries and jurisdictions. New financial models and platforms provide further opportunities for fraud and identity theft.
  • “Organised terrorism and political chaos: Harm – New digital technologies applied by those who wish to perpetrate acts of terrorism or to perform mass manipulation of populations or segments toward an enemy.
  • “The divide of the digital and non-digital populations: Harm – Those who are not connected or savvy about new digital opportunities live at a disadvantage, widening the divide between the ‘haves’ and the ‘have-nots.’
  • “Mass unemployment due to automation of jobs: Harm – AI will replace the jobs of a significant percentage of the population and a Universal Basic Income is not yet available to most. How will these large numbers of displaced people get an adequate income and live lives with significant meaning?
  • “Societies’ biases hard-coded into machines: Harm – Existing societal biases are coded into technology platforms and AI-training data sets. These continue to inaccurately reflect the majority of the world’s population and do especially poorly at portraying women and minorities; this results in discriminatory outcomes from advanced tech.
  • “Increased mental and physical health issues: Harm – People are already struggling in today’s digital setting; thus advanced tech such as VR, AR and the metaverse may pose even greater challenges to human well-being.
  • “Challenges in legal jurisdictions: Harm – The cross-border, global nature of digital platforms makes legal challenges difficult. This may be magnified when the metaverse, with no legal structures in place, becomes more populated.
  • “High-tech impact on the environment: Harm – The use of advanced technology creates a substantial negative effect that plays a significant role in climate change.”

Liza Loop: The threat to humanity lies in transitioning from an environment based on scarcity to one of abundance

Loop, educational technology pioneer, futurist, technical author and consultant, said, “I’d like to share my hopes for humanity that will likely be inspired by ongoing advances in these categories:

  • “Human-centered development of digital tools and systems: Nature’s experiments are random, not intentional or goal-directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, thereby saving us both mental and physical labor. This trend will continue, resulting in more leisure time available for non-survival pursuits.
  • “Human connections, governance and institutions: We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near-real-time can be stored and retrieved to enjoy later – even after death.
  • “Human rights: Increased communication will not advance human ‘rights’ but it might make human ‘wrongs’ more visible so that they can be diminished.
  • “Human knowledge: Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe or worthy of elevation is an age-old question and not significantly changed by digitization.
  • “Human health and well-being: There will be huge advances in medicine, and the ability to manipulate genetics will be further developed. This will be beneficial to some segments of the population. Agricultural efficiency, resulting in increased plant-based food production as well as artificial, meat-like protein, will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.
  • “Education: In my humble opinion, the most beneficial outcomes of our ‘store-and-forward’ technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as middleman. Learners will be able to hail teachers and learning resources just like they call a ride service today.

“Then there’s the other side of the coin. The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance.

“Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families and driving away or killing strangers and nonconformists. Although our species has come a long way toward peaceful and harmonious self-actualization, the vestiges of the old fearful behavior persist.

“Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?

“I see these things as likely:

  • “Human-centered development of digital tools and systems: They will fall short of advocates’ goals. Some would argue this is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?
  • “Human connections, governance and institutions: Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think ‘Christendom’ in 15th-century Europe), word travels faster and crowds are larger than they were six centuries ago. So, is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh, brought on by a change toward wickedness.
  • “Human rights: The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.
  • “Human knowledge: The threat to knowledge lies in humans’ increasing dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might lull us into believing that we don’t need to stay mentally and physically fit and agile.
  • “Human health and well-being: In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. Humanity cannot continue to improve its health and well-being indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitats or to self-limit our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.”

Giacomo Mazzone: Democratic processes could be hijacked and turned into ‘democratures’ – dictatorships emerging from rigged elections

Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “I see the future as a ‘sliding doors’ world. It can go awfully wrong or incredibly well. I don’t see a halfway outcome, half good and half bad, as possible. This answer is based on the idea that we went through the right door and that in 2035 we will have embraced human-centered development of digital tools and systems and human connections, governance and institutions.

“In 2035 we shall have myriad locally and culturally based apps run by communities. People will participate and contribute actively because they know that their data will be used to build a better future. The public interest will be the morning star of all these initiatives, and local administrations will run the interface between these applications and the services needed by the community and by each citizen: health, public transportation and schooling systems.

“Locally-produced energy and locally-produced food will be delivered via common infrastructures that are interlinked, with energy networks tightly linked to communication networks. The global climate will come to have commonly accepted protection structures (including communications). Solidarity will be in place because insurance and social costs will become unaffordable. The changes in agricultural systems arriving with advances in AI and ICTs will be particularly important. They will finally solve the dichotomy between the metropolis and the countryside. The possibility to work from everywhere will redefine metropolitan areas and increase migration to places where better services and more vibrant communities exist. This will attract the best minds.

“New applications of AI and technological innovation in health and medicine could bring new solutions for disabled people and bring relief for those who suffer from diseases. The problem will be assuring these are fully accessible to all people, not only to those who can afford them. We need to think in parallel to find scalable solutions that could be extended to the whole citizenship of a country and made available to people in least-developed countries. Why invest so much in developing a population of supercentenarians in privileged countries when the rest of the world still struggles to survive? Is such a contradiction tenable?

“Then there is the future of work and of wealth redistribution. Perhaps the most important question to ask between now and 2035 is, ‘What will be the future of work?’ Recent developments in AI foreshadow a world in which many current jobs could easily be replaced or at least reshaped completely, even in the intellectual sphere. What robots did to manual work in factories, GPT and Sparrow can now do to intellectual work. If this happens, if well-paid jobs disappear in large quantities, how will those who are displaced survive? How will communities survive as they also face an aging population? Between now and 2035, politicians will need to face these seemingly distant issues that are likely to become burning issues.

“In the worst scenario – if we go through the wrong sliding door – I expect the worst consequences in this area: human connections, governance and institutions. If the power of internet platforms is not regulated by law and by antitrust measures, and if global internet governance is not fixed, then we will face serious risks to democracies.

“Until now we have seen the effects of algorithms on big Western democracies (U.S., UK, EU) where a balance of powers exists and – despite these counter powers – we have seen the damages that can be provoked. In coming years, we shall see the use of the same techniques in democratic countries where the balance of power is less shared. Brazil, in this sense, has been a laboratory and will provide bad ideas to the rest of the world.

“With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ In countries that are already non-democratic, AI and a distorted use of digital technologies could bring mass-control of societies much more efficiently than the old communist regimes.

“As Mark Zuckerberg innocently once said, in the social media world, there is no need for spying – people spontaneously surrender private information for nothing. As Julian Assange wrote, if democratic governments fall into the temptation to use data for mass control, then everyone’s future is in danger. There is another area (apparently less relevant to the destiny of the world) where my concerns are very high, and that is the integrity of knowledge. I’m very sensitive to this issue because, as a journalist, I’ve worked all my life in search of the truth to share with my co-citizens. I am also a fanatic movie-lover and I have always been concerned about the preservation of the masterworks of the past. Unfortunately, I think that in both areas between now and 2035 some very bad moves could happen in the wrong direction thanks to technological innovation being used for bad purposes.

“In the field of news, there is a growing tendency to look not for the truth but for news that people would be interested in reading, hearing or seeing – news that better corresponds with the public’s moods, beliefs or sense of belonging. …

“In 2024 we shall know whether the UN Summit of the Future will be a success or a failure, and whether the full regulation process of the internet platforms launched by the European Union will prove to be successful. These are the most serious attempts to date to reconcile the potential of the internet with respect for human rights and democratic principles. Their success or failure will tell us whether we are moving toward the right ‘sliding door’ or the wrong one.”

Stephen Downes: Everything we need will be available online; and everything about us will be known

Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “By 2035 two trends will be evident, which we can characterize as the best and worst of digital life. Neither, though, is unadulterated. The best will contain elements of a toxic underside and the worst will have its beneficial upside.

  • The best: Everything we need will be available online.
  • The worst: Everything about us will be known; nothing about us will be secret.

“By 2035, these will only be trends, that is, we won’t have reached the ultimate state and there will be a great deal of discussion and debate about both sides.

“As to the best: As we began to see during the pandemic, the digital economy is much more robust than people expected. Within a few months, services emerged to support office work, deliver food and groceries, take classes and sit for exams, perform medical interventions, provide advice and counselling, shop for clothing and hardware and more, all online, all supported by a generally robust and reliable delivery infrastructure.

“Looking past the current COVID-19 rebound effect, we can see some of the longer-term trends emerge: work-from-home, online learning and development, digital delivery services, and more along the same lines. We’re seeing a longer-term decline in the service industry as people choose both to live and work at home, or at least, more locally. Outdoor recreation and special events still attract us, but low-quality crowded indoor work and leisure leave us cold.

“The downside is that this online world is reserved, especially at first, for those who can afford it. Though improving, access to goods and services is still difficult to obtain in rural and less-developed areas. It requires stable accommodations and robust internet access. These in turn demand a set of skills that will be out of reach for older people and those with perceptual or learning challenges. Even when they can access digital services, some people will be isolated and vulnerable; children, especially, must be protected from mistreatment and abuse.

“The Worst: We will have no secrets. Every transaction we conduct will be recorded and discoverable. Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with artificial intelligence recognizing us through our physical characteristics, habits and patterns of behaviour. The primary purpose of this surveillance will be for marketing, but it will also be used for law enforcement, political campaigns, and in some cases, repression and discrimination.

“Surveillance will be greatly assisted by automation. A police officer, for example, used to have to call in for a report on a license plate. Now a camera scans every plate within view and a computer checks every one of them. Registration and insurance documentation is no longer required; the system already knows and can alert the officer to expired plates or outstanding warrants. Facial recognition accomplishes the same for people walking through public places. Beyond the cameras, GPS tracking follows us as we move about, while every purchase is recorded somewhere.

“Total surveillance allows an often-unjust differentiation of treatment of individuals. People who need something more, for example, may be charged higher prices; we already see this in insurance, where differential treatment is described as assessment of risk. Parents with children may be charged more for milk than unmarried men. The prices of hotel rooms and airline tickets are already differentiated by location and search history and could vary in the future based on income and recent purchases. People with disadvantages or facing discrimination may be denied access to services altogether, as digital redlining expands to become a normal business practice.

“What makes this trend pernicious is that none of it is visible to most observers. Not everybody will be under total surveillance; the rich and the powerful will be exempted, as will most large corporations and government activities. Without open data regulations or sunshine laws, nobody will be able to detect when people have been treated inequitably, unfairly or unjustly.

“And this is where we begin to see the beginnings of an upside. The same system that surveils us can help keep us safe. If child predators are tracked, for example, we can be alerted when they are near our children. Financial transactions will be legitimate and legal or won’t exist (except in cash). We will be able to press an SOS button to get assistance wherever we are. Our cars will detect and report an accident before we know we were in one. Ships and aircraft will no longer simply disappear. But this does not happen without openness and laws to protect individuals and will lag well behind the development of the surveillance system itself.

“On Balance: Both the best and the worst of our digital future are two sides of the same digital coin, and this coin consists of the question: who will digital technology serve? There are many possible answers. It may be that it serves only the Kochs, Zuckerbergs and Musks of the world, in which case the employment of digital technology will be largely indifferent to our individual needs and suffering. It may be that it serves the needs of only one political faction or state in which basic needs may be met, provided we do not disrupt the status quo. It may be that it provides strong individual protections, leaving no recourse for those who are less able or less powerful. Or it may serve the interests of the community as a whole, finding a balance between needs and ability, providing each of us with enough agency to manage our own lives as long as it is not to the detriment of others.

“Technology alone won’t decide this future. It defines what’s possible. But what we do is up to us.”

Michael Dyer: AI researchers will build an entirely new type of technology – digital entities with a form of consciousness

Dyer, professor emeritus of computer science at UCLA, wrote, “AI systems like ChatGPT and DALL-E represent major advances in artificial intelligence. They illustrate ‘infinite generative capacity’ which is an ability to both generate and recognize sentences and situations never before described. As a result of such systems, AI researchers are beginning to narrow in on how to create entities with consciousness. As an AI professor I had always believed that if an AI system passed the Turing Test it would have consciousness, but systems such as ChatGPT have proven me wrong. ChatGPT behaves as though it has consciousness but does not. The question then arises: What is missing?

“A system like ChatGPT (to my knowledge) does not have a stream of thought; it remains idle when no input is given. In contrast, humans, when not asleep or engaged in some task, will experience their minds wandering – thoughts, images, past events and imaginary situations will trigger more of the same. Humans also continuously sense their internal and external environments and update representations of these, including their body orientation and location in space and the temporal position of past recalled events or of hypothetical, imagined future events.

“Humans maintain memories of past episodes. I am not aware of whether ChatGPT keeps track of the interviews it has engaged in or of the questions it has been asked (or the answers it has given). Humans are also planners; they have goals, and they create, execute and alter/repair plans that are designed to achieve their goals. Over time they also create new goals, abandon old goals and re-rank the relative importance of existing goals.

“It will not take long to integrate systems like ChatGPT with robotic and planning systems and to alter ChatGPT so that it has a continual stream of thought. These forms of integration could easily happen by 2035. Such integration will lead to an entirely new type of technology – technologies with consciousness.
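The components described as missing here – a continual stream of thought, episodic memory, and goal creation, abandonment and re-ranking – can be sketched as a toy agent loop. This is purely an illustrative sketch in Python; the class and method names (`ConsciousLoopAgent`, `idle_step`, and so on) are hypothetical and do not describe any real system.

```python
import random

class ConsciousLoopAgent:
    """Toy sketch of the capacities the passage says chatbots lack:
    episodic memory, goal management and a continual thought stream.
    All names here are illustrative, not any real system's API."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.episodes = []   # episodic memory: everything experienced
        self.goals = []      # [priority, description] pairs
        self.stream = []     # the agent's ongoing "stream of thought"

    def perceive(self, observation):
        """Sense an input and store it as an episode."""
        self.episodes.append(("observed", observation))
        self.stream.append(f"noticed: {observation}")

    def adopt_goal(self, description, priority=1.0):
        """Create a new goal; goals can later be re-ranked or dropped."""
        self.goals.append([priority, description])
        self.episodes.append(("goal_adopted", description))

    def rerank_goals(self):
        """Re-rank goals by priority, as humans continually do."""
        self.goals.sort(key=lambda g: -g[0])

    def idle_step(self):
        """Mind-wandering: with no input, recall a past episode and let
        it trigger a new thought, so the agent is never truly idle."""
        if self.episodes:
            kind, content = self.rng.choice(self.episodes)
            thought = f"recalled {kind}: {content}"
        else:
            thought = "blank mind"
        self.stream.append(thought)
        return thought
```

Calling `idle_step()` repeatedly with no new input lets stored episodes trigger fresh entries in the thought stream – a crude stand-in for the mind-wandering that, per the passage, current systems lack when left without a prompt.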

“Humans have never before created artificial entities with consciousness and so it is very difficult to predict what sort of products will come about, along with their unintended consequences.

“I would like to comment on two dissociations with respect to AI. The first is that an AI entity (whether software or robotic) can be highly intelligent while NOT being conscious or biologically alive. As a result, an AI will have none of the human needs that come from being alive and having evolved on our planet (e.g., the human need for food, air, emotional/social attachments, etc.). The second dissociation is between consciousness/intelligence and civil/moral rights. Many people might conclude that an AI with consciousness and intelligence must necessarily be given civil/moral rights; however, this is not the case. Civil/moral rights are only assigned to entities that can feel pleasure and pain. If an entity cannot feel pain, then it cannot be harmed. If an entity cannot feel pleasure, then it cannot be harmed by being denied that pleasure.

“Corporations have certain rights (e.g., they can own property) but they do not have moral/civil rights, because they cannot experience happiness or suffering. It is eminently possible to produce an AI entity that will have consciousness/intelligence but that will NOT experience pleasure/pain. If we humans are smart enough, we will restrict the creation of synthetic entities to those WITHOUT pleasure/pain. In that case, we might survive our inventions.

“In the entertainment media, synthetic entities are always portrayed by humans, and a common trope is that of those entities being mistreated by humans, with the audience then siding with them. In fact, synthetic entities will be very nonhuman. They will NOT eat food, give birth, grow from childhood into adulthood, get sick, fall in love, grow old or die. They will not need to breathe, and currently I am unaware of any AI system that has any sort of empathy for the suffering of humans. Most likely (and unfortunately) AI researchers will create AI systems that do experience pleasure/pain and will even argue for doing so, so that such systems learn to have empathy. Unfortunately, such a capacity will then turn them into agents deserving of moral consideration and thus of civil rights.

“Will humans want to give civil rights and moral status to synthetic entities who are not biologically alive and who couldn’t care less if they pollute the air that humans must breathe to stay alive? Such entities will be able to maintain backups of their memories and live on forever. Another mistake would be to give them any goals for survival. If the thought of being turned off causes such entities emotional pain, then humans will be causing suffering in a very alien sort of creature and humans will then become morally responsible for their suffering. If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”

Avi Bar-Zeev: The key difference between a good or a bad outcome is whether these systems help and empower people or exploit them

Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, said, “I expect by 2035 extended reality (XR) tools will advance significantly. We will have all-day wearable glasses that can do both AR [augmented reality] and VR. The only question is what will we want to use them for? Smartphones will no longer need screens, and they will have shrunk down to the size of a keychain (if we still remember those, since by then most doors will unlock based on our digital ID). The primary use of XR will be for communications, bringing photorealistic holograms of other people to us, wherever we are. All participants will be able to experience their own augmented spaces without us having to share our 3D environments.

“This will allow us to be more connected, mostly asynchronously. It would be impossible for us to be constantly connected to everyone in every situation, so we will develop social protocols just as we did with texting, allowing us to pop into and out of each other’s lives without interrupting others. The experience will be like having a whole team of people at your back, ready to whisper ideas in your ear based on the snippets of real life you choose to share.

“The current wave of generative AI has taught us that the best AI is made of people, both providing our creative output and also filtering the results to be acceptable by people. By 2035, the business models will have shifted to rewarding those creators and value-adders such that the result looks more like a corporation today. We’ll contribute, get paid for our work, and the AI-as-corporation will produce an unlimited quantity of new value from the combination for everyone else. It will be as if we have cracked the ultimate code for how people can work efficiently together – extract their knowledge and ideas and let the cloud combine these in milliseconds. Still, we can’t forget the human inputs or it’s just another race to the bottom.

“The flip side of this is that what we today might call ‘recommendation AI’ will merge with the above to form a kind of superintelligence that can find the most contextually appropriate content anytime, both virtually and in real life. That tech will form a kind of personal firewall that keeps our personal context private but allows for a secure gathering of the best inputs the world can offer without giving away our privacy.

“By 2035, the word ‘metaverse’ will be as popular as ‘cyberspace’ and ‘information superhighway’ became in past stages of online evolution. The companies prefixing their names with ‘meta’ all seem kind of boring now. However, after having achieved the XR and AI trends above, we will think of the metaverse quite broadly as the information space we all inhabit. The main shift by 2035 is that we will see the metaverse not as a separate space but as a massive interconnection among 10 billion people. The AR tech and AI will fade into the background, and we will simply see other people as valued creators and consumers of each other’s work and supporters of each other’s lives and social needs.

“The key difference between the most positive and negative uses of XR, AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers. XR becomes exploitive and not socially beneficial. AI builds empires on the backs of real people’s work and deprives them of a living wage as a result. The metaverse becomes a vast and insipid landscape of exploitive opportunities for companies to mine us for information and wealth, while we become enslaved to psychological countermeasures, designed to keep us trapped and subservient to our digital overlords.”

Jonathan Grudin: The menace is an army of AI acting ‘on a scale and speed that outpaces human ability to assess and correct course’

Grudin, affiliate professor of information science at the University of Washington, recently retired as a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “Addressing unintended consequences is a primary goal. Many changes are possible, but my best guess is that the best we will do is to address many of the unanticipated negatives tied at least in part to digital technology that emerged and grew in impact over the past decade: malware, invasion of privacy, political manipulation, economic manipulation, declining mental health and growing wealth disparity.

“At the turn of the millennium in 2000, the once small, homogeneous, trusting tech community – after recovering from the internet bubble – was ill-equipped to deal with the challenges arising from anonymous bad actors and well-intentioned but imperceptive actors who operated at unimagined scale and velocity. Causes and effects are now being understood. It won’t be easy, nor will it be an endeavor that will ever truly be finished, but technologists working with legislators and regulators are likely to make substantial progress.

“I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it around us already. Political leaders unable to govern. CEOs at Facebook, Twitter and elsewhere unable to understand how technologies that were intended to unite people led to nasty divisiveness and mental health issues. Google and Amazon forced to moderate content on such a scale that often only algorithms can do it and humans can’t trace individual cases to correct possible errors. Consumers who can be reliably manipulated by powerful machine-learning targeting to buy things they don’t need and can’t afford. It is early days. Little to prevent it from accelerating is on the horizon.

“We will also see an escalation in digital weapons, military spending and arms races. Trillions of dollars, euros, yuan, rubles and pounds are spent, and tens of thousands of engineers deployed, not to combat climate change but to build weaponry that the military may not even want. The United States is spending billions on an AI-driven jet fighter, despite the fact that jet fighter combat has been almost nonexistent for decades with no revival on the horizon.

“Unfortunately, the Ukraine war has exacerbated this tragedy. I believe leaders of major countries have to drop rivalries and address much more important existential threats. That isn’t happening. The cost of a capable armed drone has fallen an order of magnitude every few years. Setting aside military uses, long before 2035 people will be able to buy a cheap drone at a toy store, clip on facial recognition software and a small explosive or poison and send it off to a specified address. No need for a gun permit. I hope someone sees how to combat this.”

Beth Noveck: AI could make governance more equitable and effective; it could raise the overall quality of decision-making

Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, wrote, “One of the most significant and positive changes expected to occur by 2035 is the increasing integration of artificial intelligence (AI) into various aspects of our lives, including our institutions of governance and our democracy. With 100 million people trying ChatGPT – a type of AI that uses data from the Internet to spit out well-crafted, human-like responses to questions – between Christmas 2022 and Mardi Gras 2023 (it took the telephone 75 years to reach that level of adoption), we have squarely entered the AI age and are rapidly advancing along the S-curve toward widespread adoption.

“It is much more than ChatGPT. AI comprises a remarkable basket of data-processing technologies that make it easier to generate ideas and information, summarize and translate text and speech, spot patterns and find structure in large amounts of data, simplify complex processes, and coordinate collective action and engagement. When put to good use, these features create new possibilities for how we govern and, above all, how we can participate in our democracy.

“One area in which AI has the potential to make a significant impact is in participatory democracy, that system of government in which citizens are actively involved in the decision-making process. The right AI could help to increase citizen engagement and participation. With the help of AI-powered chatbots, residents could easily access information about important issues, provide feedback, and participate in decision-making processes. We are already witnessing the use of AI to make community deliberation more efficient to manage at scale.

“The right AI could help to improve the quality of decision-making. AI can analyze large amounts of data and identify patterns that humans may not be able to detect. This can help policymakers and participating residents make more informed decisions based on real-time, high-quality data.

“With the right data, AI can also help to predict the outcome of different policy choices and provide recommendations on the best course of action. AI is already being used to make expertise more searchable. Using large-scale data sources, it is becoming easier to find people with useful expertise and match them to opportunities to participate in governance. These techniques, if adopted, could help to ensure more evidence-based decisions.

“The right AI could help to make governance more equitable and effective. New text generation tools make it faster and easier to ‘translate’ legalese into plain English but also other languages, portending new opportunities to simplify interaction between residents and their governments and increase the uptake of benefits to which people are entitled.

“The right AI could help to reduce bias and discrimination. AI can analyze data without being influenced by personal biases or prejudices. This can help to identify areas of inequality and discrimination, which can be addressed through policy changes. For example, AI can help to identify disparities in health care outcomes based on race or gender and provide recommendations for addressing these disparities.

“Finally, AI could help us design the novel, participatory and agile systems of participatory governance that we need to regulate AI. We all know that traditional forms of legislation and regulation are too slow and rigid to respond to fast-changing technology. Instead, we need to invest in new institutions for responding to the challenges of AI and that’s why it is paramount to invest in reimagining democracy using AI.

“But all of this depends upon mitigating significant risks and designing AI that is purpose-built to improve and reimagine our democratic institutions. One of the most concerning changes that could occur by 2035 is the increased use of AI to bolster authoritarianism. With the rise of populist authoritarians and the growing susceptibility of people to such authoritarianism as a result of widening economic inequality, fear of climate change and misinformation, there is a risk of digital technologies being abused to the detriment of democracy.

“AI-powered surveillance systems are used by authoritarian governments to monitor and track the activities of citizens. This includes facial recognition technology, social media monitoring and analysis of internet activity. Such systems can be used to identify and suppress dissenting voices, intimidate opposition figures and quell protests.

“AI can be used to create and disseminate propaganda and disinformation. We’ve already seen how bots have been responsible for propagating misinformation during the COVID-19 pandemic and election cycles. Manipulation can involve the use of deepfakes, chatbots and other AI-powered tools to manipulate public opinion and suppress dissent.

“Deepfakes, which are manipulated videos or images such as those found at the Random People Generator, illustrate the potential for spreading disinformation and manipulating public opinion. Deepfakes have the potential to undermine trust in information and institutions and create chaos and confusion. Authoritarian regimes can use these tools to spread false information and discredit opposition figures, journalists and human rights activists.

“AI-powered predictive policing tools can be used by authoritarian regimes to target specific populations for arrest and detention. These tools use data analytics to predict where and when crimes are likely to occur and who is likely to commit them. In the wrong hands, these tools can be used to target ethnic or religious minorities, political dissidents and other vulnerable groups.

“AI-powered social credit systems are already in use in China and could be adopted by other authoritarian regimes. These systems use data analytics to score individuals based on their behavior and can be used to reward or punish citizens based on their social credit score. Such systems can be used to enforce loyalty to the government and suppress dissent.

“AI-powered weapons and military systems can be used to enhance the power of authoritarian regimes. Autonomous weapons systems can be used to target opposition figures or suppress protests. AI-powered cyberattacks can be used to disrupt critical infrastructure or target dissidents.

“It is important to ensure that AI is developed and used in a responsible and ethical manner, and that its potential to be used to bolster authoritarianism is addressed proactively.”

Raymond Perrault: ‘The big challenges are quality of information (veracity and completeness) and the technical feasibility of some services’

Perrault, a distinguished computer scientist at SRI International and director of its AI Center from 1988 to 2017, wrote, “First, some background. I find it useful to describe digital life as falling into three broad, and somewhat overlapping categories:

  • Content: web media, news, movies, music, games (mostly not interactive)
  • Social media (interactive, but with little dependency on automation)
  • Digital services, in two main categories: pure digital (e.g., search, financial, commerce, government) and that which is embedded in the physical world (e.g., health care, transportation, care for disabled and elderly)

“The big challenges are quality of information (veracity and completeness) and technical feasibility of some services, in particular those depending on interaction.

“Most digital services depend on interaction with human users and the physical world that is timely and highly context-dependent. Our main models for this kind of interaction today (search engines, chatbots, LLMs) are all deficient in that they depend on a combination of brittle hand-crafted rules, large amounts of labelled training data, or even larger amounts of unlabeled data, all to produce systems that are either limited in function or insufficiently reliable for critical applications. We have to consider security of infrastructure and transactions, privacy, fairness in algorithmic decision-making, sustainability for high-security transactions (e.g., with blockchain), and fairness to content creators, large and small.

“So, what good may happen by 2035? Hardware, storage, compute and communications costs will continue to decrease, both in the cloud and at the edge. Computation will continue to be embedded in more and more devices, but the usefulness of devices will continue to be limited by the constraints on interactive systems. Algorithms essential to supporting interaction between humans and computers (and between computers and the physical world) will improve if we can figure out how to combine tacit/implicit reasoning, as done by current deep learning-based language models, with more explicit reasoning, as done by symbolic algorithms.

“We don’t know how to do this, and a significant part of the AI community resists the connection, but I see it as a difficult technical problem to be solved, and I am confident that it will one day be solved. I believe that improving this connection would allow systems to generalize better, be taught general principles by humans (e.g., mathematics), reliably connect to symbolically stored information, and conform to policies and guidance imposed by humans. Doing so would significantly improve the quality of digital assistants and of physical autonomous systems. Ten years is not a bad horizon.

“Better algorithms will not solve the disinformation problem, though they will continue to be able to bring cases of it to the attention of humans. Ultimately this requires improvements in policy and large investments in people, which goes against the incentives of corporations and can only be imposed on them by governments, which are currently incapable of doing so. I don’t see this changing in a decade. Nor will better algorithms supply the investments necessary to prevent certain kinds of information services (e.g., local news) from disappearing, or ensure that content creators are treated fairly. Government services could be significantly improved by investment using known technologies, e.g., to support tax collection. The obstacles again are political, not technical.”


Alejandro Pisanty: We are threatened by the scale, speed and lack of friction for bad actors who bully and weaponize information

Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, predicted, “Improvement will come from shrewd management of the Internet’s own way of making known human conduct and motivation and how they act through technology: mass scaling/hyperconnectivity; identity management; trans-jurisdictional arbitrage; barrier lowering; friction reduction; and memory+oblivion.

“As long as these factors are managed for improvement, they can help identify advance warnings of ways in which digital tools may have undesirable side effects. An example: Phishing grows on top of all six factors, while increasing friction is the single intervention that provides the best cost-benefit ratio.

“Improvements come through human connections that cross many borders between and within societies. They throw a light on human rights while effecting timely warnings about potential violations, creating an unprecedented mass of human knowledge while getting multiple angles to verify what goes on record and correct misrepresentations (again a case for friction).

“Health outcomes are improved through the whole cycle of information: research, diffusion of health information, prevention, diagnostics and remediation/mitigation considering the gamut of social determination of health.

“Education may improve through scaling, personalization and feedback. There is a fundamental need to make sure the Right to Science becomes embedded in the growth of the Internet and cyberspace in order to align minds and competencies within the age of the technology people are using. Another way of putting this: We need to close the gap – right now 21st century technology is in the hands of people and organizations with 19th-century mentalities and competences, starting with the human body, microbes, electricity, thermodynamics and, of course, computing and its advances.

“The same set of factors that can map what we know of human motivation for improvement of humankind’s condition can help us identify ways to deal with the most harmful trends emerging from the Internet.

“Speed is included in the Internet’s mass scaling and hyperconnectivity, and the social and entrepreneurial pressure for speed leaves little time to analyze and manage the negative effects of speed, such as unintended effects of technology, ways in which it can be abused and, in turn, ways to correct, mitigate or compensate against these effects.

“Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents with increasingly extensive, disruptive and effective damage that extends into physical and social space.

“A long-term, concerted effort in societies will be necessary to harness the development of tools whose misuse is increasingly easy. The effectiveness of these tools’ incursions remains based both on the tool and on features of the victim or the intermediaries, such as naiveté, lack of knowledge, lack of Internet savvy and the need to juggle too many tasks at the same time between making a living and acquiring dominion over cyber tools.”

Barry K. Chudakov: ‘We are sharing our consciousness with our tools’

Chudakov, founder and principal at Sertain Research, predicted, “One of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is recognition of the arrival of a digital tool meta-level. We will begin to act on the burgeoning awareness of tool logic and how each tool we pick up and use has a logic designed into it. The important thing about becoming aware of tool logic, and then understanding it: Humans follow the design logic of their tools because we are not only adopters, we are adapters. That is, we adapt our thinking and behavior to the tools we use.

“This will come into greater focus between now and 2035 because our technology development – like many other aspects of our lives – will continue to accelerate. With this acceleration humans will use more tools in more ways more often – robots, apps, the metaverse and omniverse, digital twins – than at any other time in human history. If we pay attention as we adopt and adapt, we will see that we bend our perceptions to our tools: When we use a cell phone, it changes how we drive, how we sleep, how we connect or disconnect with others, how we communicate, how we date, etc.

“Another way of looking at this: We have adapted our behaviors to the logic of the tool as we adopted (used) it. With an eye to pattern recognition, we may finally come to see that this is what humans do, what we have always done, from the introduction of various technologies – alphabet, camera, cinema, television, computer, internet, cell phone – to our current deployment of AI, algorithms, digital twins, mirror worlds or omniverse.

“So, what does this mean going forward? With enough instances of designing a meta mirror of what is happening – the digital readout above the process of capturing an image with a digital camera, digital twins and mirror worlds that provide an exact replica of a product, process or environment – we will begin to notice that these technologies all have an adaptive level. At this level, when we engage with the technology, we give up aspects of will, intent, focus, reaction. We can then begin to outline and observe this process in order to inform ourselves, and better arm ourselves against (if that’s what we want) adoption abdication. That is, when we adopt a tool, do we abdicate our awareness, our focus, our intentions?

“We can study and report on how we change and how each new advancing technology both helps us and changes us. We can then make more informed decisions about who we are when we use said tool and adjust our behaviors if necessary. Central to this dynamic is the understanding that we are sharing our consciousness with our tools. They have become – and are still becoming – so sophisticated that they can sense what we want and can adapt to how we think; they are extensions of our cognition and intention. As we go from adapters to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand. …

“Of course, there is more to worry about at the level of broad systems. By the year 2035, Ian Bremmer, among others, believes the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will focus on AI and algorithms. He believes this because we can already see that these two technological advances together have made social media a haven for right-wing conspiracists, anarchic populists and various disrupters to democratic norms.

“I would not want to minimize Bremmer’s concerns; I believe them to be real. But I would also say they are insufficient. Democracies and governments generally were hierarchical constructs that followed the logic of alphabets; AI and algorithms are asymmetric technologies that follow a fundamentally different logic than the alphabetic construct of democratic norms, or even the top-down dictator style of Russia or China. So, while I agree with Bremmer’s assessment that AI and algorithms may threaten existing democratic structures, they, and the social media of which they are engines, are designed differently than the alphabetic order that gave us kings and queens, presidents and prime ministers.

“The old hierarchy was dictatorial, top-down, with most people except those at the very top beholden to, and expected to bow to the wishes of, the monarch or leader at the top. Social media and AI or algorithms have no top or bottom. They are broad horizontally and shallow vertically, whereas democratic and dictatorial hierarchies are narrow horizontally and deep vertically.

“This structural difference is the cause of Bremmer’s alarm, and it must be understood and acted upon before we can salvage democracy from the ravages of populism and disinformation. Here is the rub: Until we begin to pay attention to the logic of the tools we adopt, we will use them and then be at the mercy of the logic we have adopted. A thoroughly untenable situation.

“We must inculcate, teach, debate and come to understand the logic of our tools and see how they build and destroy our social institutions. These social institutions reward and punish, depending on where you sit within the structure of the institution. Slavery was once considered a democratic right; it was championed by many American Southerners and was an economic engine of the South before and after the Civil War. America then called itself a democracy, but it was not truly democratic – especially for those enslaved.

“To make democracy more equitable for all, we must come to understand the logic of the tools we use and how they create the social institutions we call governments. We must insist upon transparency in the technologies we adopt so we can see and fully appreciate how these technologies can change our perceptions and values.”

Marcel Fafchamps: The next wave of technology will give additional significant advantages to authoritarians and monopolists

Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, wrote, “The single most beneficial change will be the spread of already existing internet-based services to billions of people across the world, as they gradually replace their basic phones with smartphones, and as connection speed increases over time and across space. IT services to assist farmers and businesses are the most promising in terms of economic growth, together with access to finance through mobile money technology. I also expect IT-based trade to expand to all parts of the world, especially spearheaded by Alibaba.

“The second most beneficial change I anticipate is the rapid expansion of IT-based health care, especially through phone-based and AI-based diagnostics and patient interviews. The largest benefits by far will be achieved in developing countries where access to medically provided health care is limited and costly. AI-based technology provided through phones could massively increase provision and improve health at a time when the populations of many currently low- or middle-income countries (LMIC) are rapidly aging.

“The third most beneficial change I anticipate is in IT-connected drone services to facilitate dispatch to wholesale and local retail outlets, and to distribute medical drugs to local health centers and collect from them samples for health care testing. I do not expect a significant expansion of drone deliveries to individuals, except in some special cases (e.g., very isolated locations or extreme urgency in the delivery of medical drugs and samples).

“The most menacing change I expect is in terms of the political control of the population. Autocracies and democracies alike are increasingly using IT to collect data on individuals, civic organizations and firms. While this data collection is capable of delivering social and economic benefits to many (e.g., in terms of fighting organized crime, tax evasion and financial and fiscal fraud), the potential for misuse is enormous, as evidenced for instance by the social credit system put in place in China. Some countries – most prominently, the European Union – have sought to introduce safeguards against abuse. But without serious and persistent coordination with the United States, these efforts will ultimately fail given the dominance of U.S.-protected GAFAM (Google, Apple, Facebook, Amazon and Microsoft) in all countries except China and, to a lesser extent, Russia.

“The world urgently needs Conference of the Parties (COP)-style meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law. Whether this can be done is doubtful, given that democracies themselves are responsible for developing a large share of these systems of data collection and control on their own populations, as well as on those of others (e.g., politicians, journalists, civil rights activists, researchers, research and development firms).

“The second-most worrying change is the continued privatization of the internet at all levels: cloud, servers, underwater transcontinental lines, last-mile delivery and content. The internet was initially developed as free for all. But this will no longer be the case in 2035, and probably well before that. I do not see any solution that would be able to counterbalance this trend, short of a massive, coordinated effort among leading countries. But I doubt that this coordination will happen, given the enormous financial benefits gained from appropriating the internet, or at least large chunks of it. This appropriation of the internet will generate very large monopolistic gains that current antitrust regulation is powerless to address, as shown repeatedly in U.S. courts and in EU efforts against GAFAM firms. In some countries, this appropriation will be combined with heavy state control, further reinforcing totalitarian tendencies.

“The third-most worrying change is the further expansion of unbridled social media and the disappearance of curated sources of news (e.g., newsprint, radio and TV). In the past, the world has already experienced the damages caused by fake news and gossip-based information (e.g., through tabloid newspapers), but never to the extent made possible by social media. Efforts to date to moderate content on social media platforms have largely been ineffective as a result of multiple mutually reinforcing causes: the lack of coordination between competing social media platforms (e.g., Facebook, Twitter, WhatsApp, TikTok); the partisan interests of specific political parties and actors; and the technical difficulty of the task.

“These failures have been particularly disturbing in LMIC [low- and middle-income] countries where moderation in local languages is largely deficient (e.g., hate speech across ethnic lines in Ethiopia; hate speech toward women in South Asia). The damage that social media is causing to most democracies is existential. By creating silos and echo chambers, social media is eroding the trust that different groups and populations feel toward each other, and this increases the likelihood of civil unrest and populist vote. Furthermore, social media has encouraged the victimization of individuals who do not conform to the views of other groups in a way that does not allow the accused to defend themselves. This is already provoking a massive regression in the rule of law and the rights of individuals to defend themselves against accusations. I do not see any signs suggesting a desire by GAFAM firms or by governments to address this existential problem for the rule of law.

“To summarize, the first wave of IT technology did increase individual freedom in many ways (e.g., accessing cultural content previously requiring significant financial outlays; facilitating international communication, trade and travel; making new friends and identifying partners; and allowing isolated communities to find each other to converse and socialize).

“The next wave of IT technology will be more focused on political control and on the exploitation of commercial and monopolistic advantage, thereby favoring totalitarian tendencies and the erosion of the rights of the defense and of the whole system of criminal and civil justice. I am not optimistic, especially given the poor state of U.S. politics on both sides of the political spectrum.”

David Weinberger: ‘These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally’

Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, wrote, “The Internet and machine learning have removed the safe but artificial boundaries around what we can know and do, plunging us into a chaos that is certainly creative and human but also dangerous and attractive to governments and corporations desperate to control more than ever. It also means that the lines between predicting and hoping or fearing are impossibly blurred.

“Nevertheless: Right now, large language models (LLMs) of the sort used by ChatGPT know more about our use of language than any entity ever has, but they know absolutely nothing about the world. (I’m using ‘know’ sloppily here.) In the relatively short term, they’ll likely be intersected with systems that have some claim to actual knowledge so that the next generation of AI chatters will hallucinate less and be more reliable. As this progresses, it will likely disrupt both our traditional and Net-based knowledge ecosystems.

“With luck, the new knowledge ecosystem is going to have us asking whether knowing with brains and books hasn’t been one long dark age. I mean, we did spectacularly well with our limited tools, so good job fellow humans! But we did well according to a definition of knowledge tuned to our limitations.

“As machine learning begins to influence how we think about and experience our lives and world, our confidence in general rules and laws as the high mark of knowledge may fade, enabling us to pay more attention to the particulars in every situation. This may open up new ways of thinking about morality in the West and could be a welcome opportunity for the feminist ethics of care to become more known and heeded as a way of thinking about what we ought to do.

“Much of the online world may be represented by agents: software that presents itself as a digital ‘person’ that can be addressed in conversation and can represent a body of knowledge, an organization, a place, a movement. Agents are likely to have (i.e., be given) points of view and interests. What will happen when these agents have conversations with one another is interesting to contemplate.

“We are living through an initial burst of energy and progress in areas that until recently were too complex for us even to imagine addressing.

“These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally. This is an opportunity for us to come face to face with how small a light our mortal intelligence casts. But it is also an overwhelming temptation for self-centered corporations, governments and individuals to exploit that power and use it against us. I imagine that both of those things will happen.

“Second, we are heading into a second generation that has lived much of its life on the Internet. For all of its many faults – a central topic of our time – being on the Internet has also shown us the benefits and truth of living in creative chaos. We have done so much so quickly with it that we now assume connected people and groups can undertake challenges that before were too remote even to consider. The collaborative culture of the Internet – yes, always unfair and often cruel – has proven the creative power of unmanaged connective networks.

“All of these developments make predicting the future impossible – beyond, perhaps, saying that the chaos that these two technologies rely on and unleash is only going to become more unruly and unpredictable, driving relentlessly in multiple and contradictory directions. In short: I don’t know.”

Calton Pu: The digital divide will be between those who think critically and those who do not

Pu, co-director of the Center for Experimental Research in Computer Systems at Georgia Institute of Technology, wrote, “Digital life has been, and will continue to be, enriched by AI and machine learning (ML) techniques and tools. A recent example is ChatGPT, a modern chatbot developed by OpenAI and released in 2022 that is passing the Turing Test every day.

“Similar to the contributions of robotics in the physical world (e.g., manufacturing), future AI/ML tools will relieve the stress from simple and repetitive tasks in the digital world (and displace some workers). The combination of physical automation and AI/ML tools would and should lead to concrete improvements in autonomous driving, which has stalled in recent years despite massive investments on the order of many billions of dollars. One of the major roadblocks has been the gold-standard ML practice of training static models/classifiers that are insensitive to evolutionary changes over time. These static models suffer from knowledge obsolescence, in a way similar to human aging. There is an incipient recognition of the limitations of the current practice of constantly retraining ML models to bypass knowledge obsolescence manually (and temporarily). Hopefully, the next generation of ML tools will overcome knowledge obsolescence in a sustainable way, achieving what humans could not: stay young forever.
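The knowledge-obsolescence problem Pu describes can be pictured with a small, hedged sketch: a classifier trained once on pre-drift data degrades when the data distribution shifts, while one that keeps being updated adapts. Everything below is invented for illustration – the synthetic one-feature data, the abrupt label flip standing in for drift, and the choice of scikit-learn’s `SGDClassifier` with `partial_fit` as the incrementally updated model – and is not part of Pu’s argument.

```python
# Illustrative sketch: a static model vs. an incrementally updated model
# under distribution drift (here, the labeling rule inverts).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def batch(flip):
    # One feature; label is the sign of x, inverted after the "drift".
    X = rng.normal(size=(200, 1))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y) if flip else y

# Train both models on pre-drift data.
X0, y0 = batch(flip=False)
static = SGDClassifier(random_state=0).fit(X0, y0)   # trained once, then frozen
online = SGDClassifier(random_state=0)
online.partial_fit(X0, y0, classes=[0, 1])           # supports further updates

# The world changes: the labeling rule flips. Only `online` keeps learning.
for _ in range(20):
    Xd, yd = batch(flip=True)
    online.partial_fit(Xd, yd)

Xt, yt = batch(flip=True)
print("static accuracy after drift:", static.score(Xt, yt))
print("online accuracy after drift:", online.score(Xt, yt))
```

The static model is the "aging" model in Pu's analogy; the repeated `partial_fit` calls are the manual, temporary retraining workaround he says current practice relies on.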

“Well, Toto, we’re not in Kansas anymore. When considering the future issues in digital life, we can learn a lot from the impact of robotics in the physical world. For example, Boston Dynamics pledged to ‘not weaponize’ their robots in October 2022. This is remarkable, since the company was founded with, and worked on, defense contracts for many years before its acquisition by primarily non-defense companies. That pledge is an example of moral dilemma on what is right or wrong. Technologists usually remain amoral. By not taking sides, they avoid the dilemma and let both sides (good and evil) utilize the technology as they see fit. This amorality works quite well since good technology always has many applications over the entire spectrum from good to evil to the large gray areas in between.

“Microsoft Tay, a dynamically learning chatbot released in 2016, started to send inflammatory and racist speech, causing its shutdown the same day. Learning from this lesson, ChatGPT uses OpenAI’s moderation API to filter out racist and sexist prompts. Hypothetically, one could imagine OpenAI making a pledge to ‘not weaponize’ ChatGPT for propaganda purposes. Regardless of such pledges, any good digital technology such as ChatGPT could be used for any purpose (e.g., generating misinformation and fake news) if it is stolen or simply released into the wild.

“The power of AI/ML tools, particularly if they become sustainable and remain amoral, will be greater for both good and evil. We have seen significant harm from misinformation on the COVID-19 pandemic, dubbed ‘infodemic’ by the World Health Organization. More generally, misinformation is being deployed as political propaganda in every election and every war. It is easy to imagine the depth, breadth and constant renewal of such propaganda and infodemic, as well as their impact, all growing with the capabilities of future AI/ML tools used by powerful companies and governments.

“Assuming that AI/ML technologies will advance beyond the current static models, the impact of sustainable AI/ML tools on the future of digital life will be significant and fundamental, perhaps playing a greater role than industrial robots have in modern manufacturing. For those who are going to use those tools to generate content and increase their influence on people, that prospect will be very exciting. However, we have to be concerned for people who are going to consume such content as part of their digital life without thinking critically.

“The great digital divide is not going to be between the haves and have-nots of digital toys and information. With more than 6 billion smartphones in the world (estimated in 2022), an overwhelming majority of the population already has access to and participates in the digital world. The digital divide in 2035 will be between those who think critically and those who believe misinformation and propaganda. This is a big challenge for democracy, a system in which we thought more information would be unquestionably beneficial. In a Brave New Digital World, a majority can be swayed by the misuse of amoral technological tools.”

Dmitri Williams: If economic growth is prioritized over well-being, the results will not be pretty

Williams, professor of technology and society at the University of Southern California, wrote, “When I think about the last 30 years of change in our lives due to technology, what stands out to me is the rise in convenience and the decline of traditional face-to-face settings. From entertainment to social gatherings, we’ve been given the opportunity to have things cheaper, faster and higher-quality in our private spaces, and we’ve largely taken it.

“For example, 30 years ago, you couldn’t have a very good movie-watching experience in your own home, looking at a small CRT tube and standard definition, and what you could watch wasn’t the latest and greatest. So, you took a hit to convenience and went to the movie theater, giving up personal space and privacy for the benefits of better technology, better content and a more community experience. Today, that’s flipped. We can be on our couches and watch amazing content, with amazing screens and sounds and never have to get in a car.

“That’s a microcosm of just about every aspect of our lives – everything is easier now, from work over high-speed connections to playing video games. We can do it all from our homes. That’s an amazing reduction in costs and friction in our business and private lives. And the social side of that is access to an amazing breadth of people and ideas. Without moving from our couch, chair or bed, we can connect with others all over the world from a wide range of backgrounds, cultures and interests.

“Ironically, though, we feel disconnected, and I think that’s because we evolved as physical creatures who thrive in the presence of others. We atrophy without that physical presence. We have an innate need to connect, and the in-person piece is deeply tied to our natures. As we move physically more and more away from each other – or focus on far-off content even when physically present – our well-being suffers. I can’t think of anything more depressing than seeing a group of young friends together but looking at their phones rather than each other’s faces. Watching well-being trends over time, even before the pandemic, suggests an epidemic of loneliness.

“As we look ahead, those trends are going to continue. The technology is getting faster, cheaper and higher-quality, and the entertainment and business industries are delivering us better and better content and tools. AI and blockchain technologies will keep pushing that trend forward.

“The part that I’m optimistic about is best seen by the nascent rise of commercial-level AR and VR. I think VR is niche and will continue to be, not because of its technological limitations, but because it doesn’t socially connect us well. Humans like eye contact, and a thing on your face prevents it. No one is going to want to live in a physically closed off metaverse. It’s just not how we’re wired. The feeling of presence is extremely limited, and the technical advances in the next 10 years are likely to make the devices better and more comfortable, but not change that basic dynamic.

“In contrast, the potential for AR and other mixed reality devices is much more exciting because of its potential for social interactions. Whereas all of these technical advances have tended to push us physically away from each other, AR has the potential to help us re-engage. It offers a layer on top of the physical space that we’ve largely abandoned, and so it will also give us more of an incentive to be face-to-face again. I believe this will have some negative consequences around attention, privacy and capitalism invading our lives just that much more, but overall, it will be a net positive for our social lives in the long run. People are always the most interesting form of content, and layering technologies have the potential to empower new forms of connection around interests.

“In cities especially, people long for the equivalent of the icebreakers we use in our classrooms. They seek each other online based on shared interests, and we see a rise in throwback formats like board games and in-person meetups. The demand for others never abated, but we’ve been highly distracted by shiny, convenient things. People are hungry for real connection, and technologies like AR have the potential to deliver that and so to mitigate or reverse some of the well-being declines we’ve seen over the past 10 to 20 years. I expect AR glasses to go through some hype and disillusionment, but then to take off once commercial devices are socially acceptable and cheap enough. I expect that the initial faltering steps will take place over the next three years and then mass-market devices will start to take off and accelerate after that.

“Here’s my simple take: I think AR will tilt our heads up from our phones back to each other’s faces. It won’t all be wonderful because people are messy and capitalism tends to eat into relationships and values, but that tilt alone will be a very positive thing.

“What I worry most about in regard to technology is capitalism. Technology will continue to create value and save time, but the benefits and costs will fall in disproportionate ways across society.

“Everyone is rightly focused on the promise and challenges of AI at the moment. This is a conversation that will play out very differently around the world. Here in the United States, we know that business will use AI to maximize its profit and that our institutions won’t privilege workers or well-being over those profits. And so we can expect to see the benefits of AI largely accrue to corporations and their shareholders. Think of the net gain that AI could provide – we can have more output with less effort. That should be a good thing, as more goods and capital will be created and so should improve everyone’s lot in life. I think it will likely be a net positive in terms of GDP and life expectancy, but in the U.S., those gains will be minimal compared to what they could and should be.

“Last year I took a sabbatical and visited 45 countries around the world. I saw wealthy and poor nations – places where technology abounds and where it is rare. What struck me the most was the difference in values and how that plays out in promoting the well-being of everyday people. The United States is comparatively one of the worst places in the world at prioritizing well-being over economic growth and the accumulation of wealth by a minority (yes, some countries are worse still). That’s not changing any time soon, and so in that context, I look at AI and ask what kind of impacts it’s likely to have in the next 10 years. It’s not pretty.

“Let’s put aside our headlines about students plagiarizing papers and think about the job displacements that are coming in every industry. When the railroads first crossed the U.S., we rightly cheered, but we also didn’t talk a lot about what happened to the people who worked for the Pony Express. Whether it’s the truck driver replaced by autonomous vehicles, the personal trainer replaced by an AI agent, or the stockbroker who’s no longer as valuable as some code, AI is going to bring creative destruction to nearly every industry. There will be a lot of losers.”

Russell Neuman: Let’s try a system of ‘intelligent privacy’ that would compensate users for their data

Neuman, professor of media technology at New York University, wrote, “One of my largest concerns is for the future of privacy. It’s not just that that capacity will be eroded. Of course, it will be, because of the interests of governments and private enterprise. My concern is about a lost opportunity that our digital technologies might otherwise provide for: what I like to call ‘intelligent privacy.’

“Here’s an idea. You are well aware that your personal information is a valuable commodity for the social media and online marketing giants like Google, Facebook, Amazon and Twitter. Think about the rough numbers involved – Internet advertising in the U.S. for 2022 is about $200 billion. The number of active online users is about 200 million. $200 billion divided by 200 million. So, your personal information is worth about $1,000. Every year. Not bad. The idea is: Why not get a piece of the action for yourself? It’s your data. But don’t be greedy. Offer to split it with the Internet biggies 50-50. $500 for you, $500 for those guys to cover their expenses.
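
A quick sanity check of the essay’s back-of-the-envelope division, sketched in Python (the revenue and user figures are the author’s round estimates, not audited data):

```python
# Rough per-user value of personal data, using the essay's round figures.
ad_revenue_usd = 200e9  # estimated U.S. internet ad revenue, 2022
active_users = 200e6    # estimated active online users in the U.S.

value_per_user = ad_revenue_usd / active_users
user_share = value_per_user / 2  # the proposed 50-50 split

print(f"Annual value per user: ${value_per_user:,.0f}")      # $1,000
print(f"User's share of a 50-50 split: ${user_share:,.0f}")  # $500
```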

“Thank you very much. But the Tech Giants are not going to volunteer to initiate this sort of thing. Why would they? So there has to be a third party to intervene between you and Big Tech. There are two candidates for this – first, the government, and second, some new private for-profit or not-for-profit. Let’s take the government option first.

“There seems to be an increasing appetite for ‘reining in Big Tech’ in the United States on Capitol Hill. It even seems to have some bipartisan support, a rarity these days. But legislation is likely to take the form of an antitrust policy to prevent competition-limiting corporate behaviors. Actually, proactively entering the marketplace to require some form of profit sharing is way beyond current-day congressional bravado. The closest Congress has come so far is a bill called DASHBOARD (an acronym for Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data) which would require major online players to explain to consumers and financial regulators what data they are collecting from online users and how it is being monetized. The Silicon Valley lobbyists squawked loudly and so far the bill has gone nowhere. And all that was proposed in that case was to make some data public. Dramatic federal intervention into this marketplace is simply not in the cards.

“So, what about nongovernmental third parties? There are literally dozens of small for-profit startups and not-for-profits in the online privacy space. Several alternative browser search engines such as DuckDuckGo, Neeva and Brave offer privacy-protected browsing. But as for-profits, they often end up substituting their own targeted ads (presumably without sharing information) for what you would otherwise see on a Google search or a Facebook feed.

“Brave is experimenting with rewarding users for their attention with cryptocurrency tokens called BATs (Basic Attention Tokens). This is a step in the right direction. But so far, usage is tiny, distribution is limited to affiliated players, and the crypto value bubble complicates the incentives.

“So, the bottom line here is that Big Tech still controls the golden goose. These startups want to grab a piece of the action for themselves and try to attract customers with ‘privacy-protection’ marketing rhetoric and with small, tokenized incentives which are more like a frequent flyer program than real money. How would a serious piece-of-the-action system for consumers work? It would have to allow a privacy-conscious user to opt out entirely. No personal information would be extracted. There’s no profit there, so no profit sharing. So, in that sense, those users ‘pay’ for the privilege of using these platforms anonymously.

“YouTube offers an ad-free service for a fee as a similar arrangement. For those people open to being targeted by eager advertisers, there would be an intelligent privacy interface between users and the online players. It might function like a VPN [virtual private network] or proxy server but one which intelligently negotiates a price. ‘My gal spent $8,500 on online goods and services last year,’ the interface notes. ‘She’s a very promising customer. What will you bid for her attention this month?’

“Programmatic online advertising already works this way. It is all real-time algorithmic negotiations of payments for ad exposures. A Supply Side Platform gathers data about users based on their online behavior and geography and electronically offers their ‘attention’ to an Ad Exchange. At the Ad Exchange, advertisers on a Demand Side Platform have 10 milliseconds to respond to an offer. The Ad Exchange algorithmically accepts the highest high-speed bid for attention. Deal done in a flash. Tens of thousands of deals every second. It’s a $100 billion marketplace.”
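
The auction flow described above can be sketched in a few lines. This is a simplified, sealed-bid, highest-bid-wins toy (the platform names, fields and pricing rules here are illustrative assumptions; real exchanges speak protocols such as OpenRTB and often use first- or second-price auction variants):

```python
from dataclasses import dataclass, field

@dataclass
class BidRequest:
    user_id: str
    geo: str
    interests: list = field(default_factory=list)  # behavioral signals from the supply side

def run_auction(request, bidders):
    """Offer one impression to demand-side bidders and accept the highest bid.

    Each bidder is a function that prices the impression (or returns None
    to pass). The essay's 10-millisecond response window is not enforced
    in this sketch.
    """
    bids = [(fn(request), name) for name, fn in bidders.items()]
    bids = [(price, name) for price, name in bids if price is not None]
    if not bids:
        return None  # no demand for this impression
    price, winner = max(bids)  # highest bid wins
    return winner, price

# Two illustrative demand-side platforms bidding on one impression.
bidders = {
    "dsp_a": lambda r: 2.10 if r.geo == "US" else None,
    "dsp_b": lambda r: 2.75 if "travel" in r.interests else 0.50,
}
req = BidRequest(user_id="u123", geo="US", interests=["travel"])
print(run_auction(req, bidders))  # ('dsp_b', 2.75)
```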

Maggie Jackson: Complacency and market-driven incentives keep people from focusing on the problems AI can cause

Jackson, award-winning journalist, social critic and author, wrote, “The most critical beneficial change in digital life now on the horizon is the rise of uncertain AI. In the six decades of its existence, AI has been designed to achieve its objectives, however it can. The field’s overarching mission has been to create systems that can learn how to play a game, spot a tumor, drive a car, etc., on their own as well as or better than humans can do so.

“This foundational definition of AI largely reflects a centuries-old ideal of intelligence as the realization of one’s goals. However, the field’s erratic yet increasingly impressive success in building objective-driven AI has created a widening and dangerous gap between AI and human needs. Almost invariably, an initial objective set by a designer will deviate from a human’s needs, preferences and well-being come ‘run-time.’

“Nick Bostrom’s once-seemingly laughable example of a super-intelligent AI system tasked with making paper clips, which then takes over the world in pursuit of this goal, has become a plausible illustration of the unstoppability and risk of reward-centric AI. Already, the ‘alignment problem’ can be seen in social media platforms designed to bolster user time online by stoking extremist content. As AI grows more powerful, the risks of models that have a cataclysmic effect on humanity dramatically increase.

“Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. En route to achieving its goals, AI traditionally has been designed to dispatch unforeseen obstacles, such as something in its path. But what AI visionary Stuart Russell calls ‘human-compatible AI’ is instead designed to be uncertain about its goals, and so to be open to and adaptable to multiple possible scenarios.

“An uncertain model or robot will ask a human how it should fetch coffee or show multiple possible candidate peptides for creating a new antibiotic, instead of pursuing the single best option befitting its initial marching orders.
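
The ‘ask rather than optimize’ behavior can be illustrated with a toy decision rule (the confidence margin, threshold and ask-the-human fallback below are illustrative choices, not Russell’s formulation):

```python
def choose_action(preferences, ask_human, margin_threshold=0.2):
    """Act only when one option clearly dominates; otherwise defer to a human.

    `preferences` maps candidate actions to the agent's estimated probability
    that each matches what the human actually wants. When no candidate is
    clearly preferred, the agent asks instead of optimizing blindly.
    """
    ranked = sorted(preferences, key=preferences.get, reverse=True)
    best = ranked[0]
    runner_up = preferences[ranked[1]] if len(ranked) > 1 else 0.0
    if preferences[best] - runner_up < margin_threshold:
        return ask_human(ranked)  # uncertain: defer the choice
    return best                   # confident: act autonomously

# Near-tie between two ways to fetch coffee: the agent defers.
prefs = {"fetch_coffee_now": 0.55, "ask_which_mug": 0.50}
picked = choose_action(prefs, ask_human=lambda options: options[0])
```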

“The movement to make AI uncertain is just gaining ground and is largely experimental. It remains to be seen whether tech behemoths will pick up on this radical change. But I believe this shift is gaining traction, and none too soon. Uncertain AI is the most heartening trend in technology that I have seen in a quarter-century of writing about the field.

“One of the most menacing, if not the most menacing, changes likely to occur in digital life in the next decade is a deepening complacency about technology. If first and foremost we cannot retain a clear-eyed, thoughtful and constant skepticism about these tools, we cannot create or choose technologies that help us flourish, attain wisdom and forge mutual social understanding. Ultimately, complacent attitudes toward digital tools blind us to the actual power that we do have to shape our futures in a tech-centric era.

“My concerns are three-part: First, as technology becomes embedded in daily life, it typically is less explicitly considered and less seen, just as we hardly give a thought to electric light. The recent Pew report on concerns about the increasing use of AI in daily life shows that 46% of Americans have equal parts excitement and concern over this trend, and 40% are more concerned than excited. But only 30% correctly identified all the places where AI is being used, and nearly half think they do not regularly interact with AI, a level of detachment that is implausible given the ubiquity of smartphones and of AI itself. AI, in a nutshell, is not fully seen. As well, it’s alarming that the most vulnerable members of society – people who are less well-educated, have lower incomes, and/or are elderly – demonstrate the least awareness of AI’s presence in daily life and show the least concern about this trend.

“Second, mounting evidence shows that the use of technology itself easily can lead to habits of thought that breed intellectual complacency. Not only do we spend less time adding to our memory stores in a high-tech era, but ‘using the internet may disrupt the natural functioning of memory,’ according to researcher Benjamin Storm. Memory-making is less activated, data is decontextualized and devices erode time for rest and sleep, further disrupting memory processing. As well, device use nurtures the assumption that we can know at a glance. After even a brief online search, information seekers tend to think they know more than they actually do, even when they have learned nothing from a search, studies show. Despite its dramatic benefits, technology therefore can seed a cycle of enchantment, gullibility and hubris that then produces more dependence on technology.

“Finally, the market-driven nature of technology today muffles any concerns that are shown about devices. Consider the case of robot caregivers. Although a majority of Americans and people in EU countries say they would not want to use robot care for themselves or family members, such robots increasingly are sold on the market with little training, caveats or even safety features. Until recently, older people were not consulted in the design and production of robot caregivers built for seniors. Given the highly opaque, tone-deaf and isolationist nature of big-tech social media and AI companies, I am concerned that whatever skepticism people may have about technology may be ignored by its makers.”

Louis Rosenberg: The boundary between the physical and digital worlds will vanish and tech platforms will know everything we do and say

Rosenberg, CEO and chief scientist at Unanimous AI, predicted, “As I look ahead to the year 2035, it’s clear to me that certain digital technologies will have an oversized impact on the human condition, affecting each of us as individuals and all of us as a society. These technologies will almost certainly include artificial intelligence, immersive media (VR and AR), robotics (service and humanoid robots) and powerful advancements in human-computer interaction (HCI) technologies. At the same time, blockchain technologies will continue to advance, likely enabling us to have persistent identity and transferrable assets across our digital lives, supporting many of the coming changes in AI, VR, AR and HCI.

“So, what are the best and most beneficial changes that are likely to occur? As a technologist who has worked on all of these technologies for over 30 years, I believe these disciplines are about to undergo a revolution driving a fundamental shift in how we interact with digital systems. For the last 60 years or so, the interface between humans and our digital lives has been through keyboards, mice and touchscreens to provide input and the display of flat media (text, images, videos) as output. By 2035, this will no longer be the dominant model. Our primary means of input will be through natural dialog enabled by conversational AI and our primary means of output will be rapidly transitioning to immersive experiences enabled through mixed-reality eyewear that brings compelling virtual content into our physical surroundings.

“I look at this as a fundamental shift from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ That’s because by 2035, human interface technologies – both input and output – will finally allow us to interact with digital systems the way our brains evolved to engage our world: through natural experiences in our immediate surroundings via mixed reality and through natural human language, conversational AI.

“As a result, by 2035 and beyond, the digital world will become a magical layer that is seamlessly merged with our physical world. And when that happens, we will look back at the days when people engaged their digital lives by poking their fingers at little screens in their hands as quaint and primitive. We will realize that digital content should be all around us and should be as easy to interact with as our physical surroundings. At the same time, many physical artifacts (like service robots, humanoid robots and self-driving cars) will come alive as digital assets that we engage through verbal dialog and manual gestures. As a consequence, by the end of the 2030s the differences will largely disappear in our minds between what is physical and what is digital.

“I strongly believe that by 2035 our society will be transitioning from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ This transition will move us away from traditional forms of digital content (text, images, video) that we engage today with mice, keyboards and touchscreens to a new age of immersive media (virtual and augmented reality) that we will engage mostly through conversational dialog and natural physical interactions.

“While this will empower us to interact with digital systems as intuitively as we interact with the physical world, there are many significant dangers this transition will bring. For example, the merger of the digital world and the physical world will mean that large platforms will be able to track all aspects of our daily lives – where we are, who we are with, what we look at, even what we pick up off store shelves. They will also track our facial expressions, vocal inflections, manual gestures, posture, gait and mannerisms (which will be used to infer our emotions throughout our daily lives). In other words, by 2035 the blurring of the boundaries between the physical and digital worlds will mean (unless restricted through regulation) that large technology platforms will know everything we do and say during our daily lives and will monitor how we feel during thousands of interactions we have each day.

“This is dangerous and it’s only half the problem. The other half of the problem is that conversational AI systems will be able to influence us through natural language. Unless strictly regulated, targeted influence campaigns will be enacted through conversational agents that have a persuasive agenda. These conversational agents could engage us through virtual avatars (virtual spokespeople) or through physical humanoid robots. Either way, when digital systems engage us through interactive dialog, they could be used as extremely persuasive tools for driving influence. For specific examples, I point you to a white paper “From Marketing to Mind Control” written in 2022 for the Future of Marketing Institute and to the 2022 IEEE paper “Marketing in the Metaverse and the Need for Consumer Protections.”

Wendy Grossman: Tech giants are losing ground, making room for new approaches that don’t involve privacy-invasive surveillance of the public

Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, wrote, “For the moment, it seems clear that the giants that have dominated the technology sector since around 2010 are losing ground as advertisers respond to social and financial pressures, as well as regulatory activity and antitrust actions. This is a good thing, as it opens up possibilities for new approaches that don’t depend on constant, privacy-invasive surveillance of Internet users.

“With any luck, that change in approach should spill over into the physical world to create smart devices that serve us rather than the companies that make them. A good example at the moment is smart speakers, whose business models are failing. Amazon is finding that consumers don’t want to use Alexa to execute purchases; Google is cutting back the division that makes Google Home.

“Similarly, the ongoing relentless succession of cyberattacks on user data might lead businesses and governments to recognize that large pools of data are a liability, and to adopt structures that put us in control of our own data and allow us to decide whom to share it with. In the UK, Mydex and other providers of personal data stores have long been pursuing this approach. …

“Many of the biggest concerns about life until 2035 are not specific to the technology sector: the impact of climate change and the disruption and migration it is already beginning to bring; continued inequality and the likely increase in old age poverty as Generation Rent reaches retirement age without the means to secure housing; the ongoing overall ill-health (cardiovascular disease, diabetes, dementia) that is and will be part of the legacy of the SARS-CoV-2 pandemic. These are sweeping problems that will affect all countries, and while technology may help ameliorate the effects, it can’t stop them. Many people never recovered from the 2008 financial crisis (see the movie ‘Nomadland’); the same will be true for those worst affected by the pandemic.

“In the short term, the 2023 explosion of new COVID-19 cases expected in China will derail parts of the technology industry; there may be long-lasting effects.

“I am particularly concerned about the increasing dependence on systems that require electrical power to work in all aspects of life. We rarely think in terms of providing alternative systems that we can turn to when the main ones go down. I’m thinking particularly of those pushing to get rid of cash in favor of electronic payments of all types, but there are other examples.

“If allowed to continue, the reckless adoption of new technology by government, law enforcement and private companies without public debate or consent will create a truly dangerous state. I’m thinking in particular of live facial recognition, which just a few weeks ago was used by MSG Entertainment to locate and remove lawyers attending concerts and shows at its venues because said lawyers happened to work for firms that are involved in litigation against MSG. (The lawyers themselves were not involved.) This way lies truly disturbing and highly personalized discrimination. Even more dangerous, the San Francisco Police Department has proposed to the city council that it should be allowed to deploy robots with the ability to maim and kill humans – only for use in the most serious situations, of course.

“Airports provide a good guide to the worst of what our world could become. In a piece I wrote in October 2022, I outline what the airports of the future, being built today without notice or discussion, will be like: all-surveillance all the time, with little option to ask questions or seek redress for errors. Airports – and the Disney parks – provide a close look at how ‘smart cities’ are likely to develop.

“I would like to hope that decentralized sites and technologies like Mastodon, Discord and others will change the dominant paradigm for the better – but the history of cooperatives tends to show that there will always be a few big players. Email provides a good example. While it is still true that anyone can run an email server, it is no longer true that they can do so as an equal player in the ecosystem. Instead, it is increasingly difficult for a small server to get its connections accepted by the tiny handful of big players. Accordingly, the most likely outcome for Mastodon will be a small handful of giant instances, and a long, long tail of small ones that find it increasingly difficult to function. The new giants created in these federated systems will still find it hard to charge or sell ads. They will have to build their business models on ancillary services for which the social media function provides lock-in, just as today Gmail profits Google nothing, but it underpins people’s use of its ad-supported search engine, maps, Android phones, etc. This provides Google with a social graph it can use in its advertising business.”

Alf Rehn: The AI turf war will pit governments trying to control bad actors against bad actors trying to weaponize AI tools

Rehn, professor of innovation, design and management at the University of Southern Denmark, wrote, “Humans and technology rarely develop in perfect sync, but we will see them catching up. We’ve lived through a period in which digital tech has developed at speeds we’ve struggled to keep up with; there is too much content, too much noise and too much disinformation.

“Slowly but surely, we’re getting the tools to regain some semblance of control. AI used to be the monster under our beds, but now we’re seeing how we might make it our obedient dog (although some still fear it might be a cat in disguise). As new tools are released, we’re increasingly seeing people using them for fearless experimentation, finding ways to bend ever more powerful technologies to human wills. From fearing that AI and other technologies are going to take our jobs and make us obsolete, humans are finding ever more ways to elevate themselves with technology and making digital wrangling into not just the hobby of a few forerunners, but a new folk culture.

“There was a time when using electricity was something you could only do after serious education and a long apprenticeship. Today, we all know how a plug works. The same is happening in the digital space. Increasingly, digital technologies are being made so easy to use and manipulate that they become the modern equivalent of electricity. Once every man, woman and child knows how to use an AI to solve a problem, digital technology becomes ever less scary and more and more the equivalent of building with Lego blocks. In 2035 the limits are not technological, but creative and communicative. If you can dream it and articulate it, digital technology can build it, improve upon it and help you transcend the limitations you thought you had.

“That is, unless a corporate structure blocks you.

“Spiderman’s Uncle Ben said, ‘With great power comes great responsibility.’ What happens when we all gain great power? The fact that some of us will act irresponsibly is already well known, but we also need to heed the backlash this all brings. There are great institutional powers at play that may not be that pleased with the power that the new and emerging digital technologies afford the general populace. At the same time, there is a distinct risk that radicalized actors will find ever more toxic ways to utilize the exponentially developing digital tools – particularly in the field of AI. A common fear in scary future scenarios is that AIs will develop to a point where they subjugate humanity. But right now, leading up to 2035, our biggest concern is the ways in which humans are and will be weaponizing AI tools.

“Where this places most of humanity is in a double bind. As digital technology becomes more and more powerful, state institutions will aim to curtail bad actors using it in toxic ways. At the same time, and for the same reason, bad actors will find ever more creative ways to use it to cheat, fool, manipulate, defraud and otherwise mess with us. The average Joe and/or Jane (if such a thing exists anymore) will be caught up in the coming AI turf wars, and some will become collateral damage.

“What this means is that the most menacing thing about digital technologies won’t be the tech itself, nor any one person’s deployment of the same, but being caught in the pincer movement of attempted control and wanton weaponization. We think we’ve felt this now, with the occasional social media post being quarantined, but things are about to get a lot, lot worse.

“Imagine having written a simple, original post, only to see it torn apart by content-monitoring software and at the same time endlessly repurposed by agents who twist your message to its very antithesis. Imagine this being a normal, daily affair. Imagine being afraid to even write an email, lest it becomes fodder in the content wars. Imagine tearing your children’s tech away, just to keep them safe for a moment longer.”

Garth Graham: We don’t understand what society becomes when machines are social agents

Graham, longtime Canadian networked communities leader, wrote, “Consider the widely accepted Internet Society phrase, ‘Internet Governance Ecology.’ In that phrase, what does the word ecology actually mean? Is the Internet Society’s description of Internet governance as ecology a metaphor, an analogy or a reality? And, if it is a reality, what are the consequences of accepting it?

“Digital technology surfaces the importance of understanding two different approaches to governance. Our current understanding of governance, including democracies, is hierarchical, mechanistic and measures things on an absolute scale. The rules about making rules are assumed to be applied externally from outside systems of governance. And this means that those with power assume their power is external to the systems they inhabit. The Internet, as a set of protocols for inter-networking, is based on a different assumption. Its protocols are grounded in a shift in epistemology away from the mechanistic and toward the relational.

“It is a common pool resource and an example of the governance of complex adaptive self-organizing systems. In those systems, the rules about making rules are internal to each and every element of the system. They are not externally applied. This complexity means that the adaptive outcomes of such systems cannot be predicted from the sum of the parts. The assumption of control by leadership inherent in the organization of hierarchical systems is not present. In fact, the external imposition of management practices on a complex adaptive system is inherently disruptive of the system’s equilibrium. So the system, like a packet-switched network, has to route around it to survive. …

“I do not think we understand what society becomes when machines are social agents. Code is the only language that’s executable. It is able to put a plan or instruction or design into effect on its own. It is a human utterance (artifact) that, once substantiated in hardware, has agency. We write the code and then the code writes us. Artificial intelligence intensifies that agency. That makes necessary a shift in our assumptions about the structure of society. All of us now inhabit dynamic systems of human-machine interaction. That complexifies our experience. Yes, we make our networks and our networks make us. Interdependently, we participate in the world and thus change its nature. We then adapt to an altered nature in which we have participated. But the ‘we’ in those phrases now includes encoded agents that interact autonomously in the dynamic alteration of culture. Those agents sense, experience and learn from the environment, modifying it in the process, just as we do. This represents an increase in the complexity of society and the capacity for radical change in social relations.

“Ursula Franklin’s definition of technology – ‘Technology involves organization, procedures, symbols, new words, equations, and, most of all, it involves a mindset’ – is that it is the way we do things around here. It becomes different as a consequence of a shift in the definition of ‘we.’ AI increases our capacity to modify the world, and thus alter our experience of it. But it puts ‘us’ into a new social space we neither understand nor anticipate.”

Kunle Olorundare: There will be universal acceptance of open-source applications to help make AI and robotics safe and smart

Olorundare, vice president of the Nigeria Chapter of the Internet Society, wrote, “Digital technology has come to stay in our lives for good. One area that excites me about the future is the use of artificial intelligence, which of course is going to shape the way we live by 2035. We have started to see the dividends of artificial intelligence in our society. Essentially, the human-centered development of digital tools and systems is safely advancing human progress in the areas of transportation, health, finances, energy harvesting and so on.

“As an engineer who believes in the power of digital technology, I see limitless opportunities for our transportation system. Beyond personal driverless cars and taxis, by 2035 our public transportation will be taken over by remote-controlled buses with accurate timing (a marginal error of 0.0099), making the use of personal cars feel needless. This will be cheaper and more dependable.

“Autonomous public transport will be pocket-friendly to the general citizenry. This will come with less pollution as energy harvesting from green sources will take a tremendous positive turn with the use of IoT and other digital technologies that harvest energy from multiple sources by estimating what amount of energy is needed and which green sources are available at a particular time with plus one redundancy. Hence minimal inefficiencies. Deployment of bigger drones that can come directly to your house to pick you up after identifying you and debiting your digital wallet account and confirming the payment will be a reality. The use of paper tickets will be a thing of the past as digital wallets to pay for all services will be ubiquitous.

“In regard to human connections, governance and institutions and the improvement of social and political interactions, by 2035, the body of knowledge will be fully connected. There will be universal acceptance of open-source applications that make it possible to have a globally robust body of knowledge in artificial intelligence and robotics. There will be less depression in society. If your friends are far away, robots will be available as friends you can talk to and even watch TV with and analyze World Cup matches as you might do with your friends. Robots will also be able to contribute to your research work even more than what ChatGPT is capable of today. …

“Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.

“Human health and well-being will benefit greatly from the use of AI, bringing about a healthy population as sicknesses and diseases can be easily diagnosed. Infectious diseases will become less virulent because of the use of robots in highly infectious pandemics and pandemics can easily be curbed. With enhanced big data using AI and ML, pandemics can be easily predicted and prevented, and the impact curve flattened in the shortest possible time using AI-driven pandemic management systems.

“It is pertinent to also look at the other side of the coin as we gain positive traction on digital technologies. There will be concern about the safety of humans as technology is used by scoundrels for crime, mischief and other negative ends. Technology is often used to attack innocent souls. It can be used to manipulate the public or destroy political enemies, thus it is not necessarily always the ‘bad guys’ who are endangering our society. Human rights may be abused. For example, a government may want to tie us to one digital wallet through a central bank of digital currencies and dictate how we spend our money. These are issues that need to be looked at in order not to trample on human rights. Technological decolonization may also raise a concern as unique cultures may be eroded due to global harmonization. This can create an unequal society in which some sovereignty may benefit more than others.”

Jeff Jarvis: Let’s hope media culture changes and focus our attention on discovering, recommending and supporting good speech

Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, wrote, “I shall share several hopes and one concern:

  • “I hope that the tools of connection will enable more and more diverse voices to at last be heard outside the hegemonic control of mass media and political power, leading to richer, more inclusive public discourse.
  • “I hope we begin to see past the internet’s technology as technology and understand the net as a means to connect us as humans in a more open society and to share our information and knowledge on a more equitable and secure basis for the benefit of us all.
  • “I hope we might finally move beyond mass media’s current moral panic over the internet as competition and, indeed, supersede the worst of mass media’s failing institutions, beginning with the notion of the mass and media’s invention of the attention economy.
  • “I hope that – as occurred at the birth of print – we will soon turn our attention away from the futile folly of trying to combat, control and outlaw all bad speech and instead focus our attention and resources on discovering, recommending and supporting good speech.
  • “I hope the tools of AI – the subject of mass media’s next moral panic – will help people intimidated by the tools of writing and research to better express their ideas and learn and create.
  • “I hope we will have learned the lesson taught us by Elon Musk: that placing our discourse in the hands of centralized corporations is perilous and antithetical to the architecture and aims of the Internet; federation at the edge is a far better model.
  • “I hope that regulators will support opening data for researchers to study the impact and value of the net – and will support that work with necessary resources.

“I fear the pincer movement from right and left, media and politics, against Section 230 and protection of freedom of expression will lead to regulation that raises liability for holding public conversation and places a chill over it, granting protection to and extending the corrupt reign of mass media and the hedge-fund-controlled news industry.”

Maja Vujovic: We will have tools that keep us from drowning in data

Vujovic, owner and director of Compass Communications in Belgrade, Serbia, wrote, “New technologies don’t just pop up out of the blue; they grow through iterative improvements of conceivable concepts moved forward by bold new ideas. Thus, in the decade ahead, we will see advances in most of the key breakthroughs we already know and use (automation and robotics, sensors and predictive maintenance, AR and VR, gaming and metaverse, generative arts and chatbots and digital humans) as they mature into the mass mainstream.

“Much as spreadsheet tech sprouted in the 1970s and first thrived on mainframe computers but became adopted en masse when those apps migrated onto personal desktops, in the same way, we will witness in the coming years countless variations of apps for personal use of our current top-tier technologies.

“The most useful among those tech-granulation trends will be the use of complex tech in personalized health care. We will see very likable robots serve as companions to ailing children and as care assistants to infirm elderly. Portable sensors will graduate from superfluous swagger to life-saving utility. We are willing and able to remotely track our pets now, but gradually we will track our small children or parents with dementia as well.

“Drowning in data, we will have tools for managing other tools and widgets for automating our digital lives. Apps will work silently in the background, or in our sleep, tagging our personal photos, tallying our daily expenses, planning our celebrations or curating our one (combined) social media feed. Rather than supplanting us and scaling our creative processes (which by definition only works on a scale of one!) technology will be deployed where we need it the most, in support of what we do best – and that is human creation.

“To extract the full value from tools like chatbots, we will all soon need to master the arcane art of prompting AI. A prompt engineer is already a highly paid job. In the next decade, prompting AI will be an advanced skill at first, then a realm of licensed practitioners and eventually an academic discipline.

“Of course, we still have many concerns. One of them is the limitations imposed by the ways in which AI is now being trained on limited sets of data. Our most advanced digital technologies are a result of unprecedented aggregation. Top apps have enlisted almost half of the global population. The only foreseeable scenario for them is to keep growing. Yet our global linguistic capital is not evenly distributed.

“By compiling the vocabularies of languages with far fewer users than English or Chinese have, a handful of private enterprises have captured and processed the linguistic equity not only of English, Hindi or Spanish, but of many smaller cultures as well, such as Serbian, Welsh or Sinhalese. Those cultures have far less capacity to compile and digitally process their own linguistic assets by themselves. While most benign in times of peace, this imbalance can have grave consequences during more tense periods. Effectively, it is a form of digital supremacy, which in time might prove taxing on smaller, less wealthy cultures and economies.

“Moreover, technology is always at the mercy of other factors, which get to determine whether it is used or misused. The more potent the technologies at hand, the more damage they can potentially inflict. Having known war firsthand and having gone through the related swift disintegration of social, economic and technical infrastructure around me, I am concerned to think how utterly devastating such disintegration would be in the near future, given our total dependence on an inherently frail digital infrastructure.

“With our global communication signals fully digitized in recent times, there would be absolutely no way to get vital information, talk to distant relatives or collect funds from online finance operators, in case of any accidental or intentional interruptions or blockades of Internet service. Virtually all amenities of contemporary living – our whole digital life – may be canceled with a flip of a switch, without recourse. As implausible as this sounds, it isn’t impossible. Indeed, we have witnessed implausible events take place in the recent years. So, I don’t like the odds.”

Paul Jones: ‘We used to teach people how to use computers. Now we teach computers how to use people’

Jones, professor emeritus at UNC-Chapel Hill School of Information and Library Science, wrote, “There is a specter haunting the internet – the specter of artificial intelligence. All the powers of old thinking and knowledge production have entered into a holy (?) alliance to exorcise this specter: frenzied authors, journalists, artists, teachers, legislators and, most of all, lawyers. We are still waiting to hear from the pope.

“In education, we used to teach people how to use computers. Now, we teach computers how to use people. By aggregating all that we can of human knowledge production in nearly every field, the computers can know more about humans as a mass and as individuals than we can know of ourselves. The upside is these knowledgeable computers can provide, and will quickly provide, better access to health, education and in many cases art and writing for humans. The cost is a loss of personal and social agency at individual, group, national and global levels.

“Who wouldn’t want the access? But who wouldn’t worry, rightly, about the loss of agency? That double desire is what makes answering these questions difficult. ‘Best and most beneficial’ and ‘most harmful and menacing’ are not so much opposites as conjoined – like conjoined twins sharing essential organs and blood systems. Unlike for some such twins, no known surgery can separate them. Just as cars gave us, over a short time, a democratization of travel and at the same time became major agents of death – immediately in wrecks, more slowly via pollution – AI and the infrastructure to support it will give us untold benefits and access to knowledge while causing untold harm.

“We can predict somewhat the direction of AI, but more difficult will be how to understand the human response. Humans are now, or will soon be, co-joined to AI even if they don’t use it directly. AI will be used on everyone just as one need not drive or even ride in a car to be affected by the existence of cars. AI changes will emerge when it possesses these traits:

  • “Distinctive presences (aka voices but also avatars personalized to suit the listener/reader in various situations). These will be created by merging distinctive human writing and speaking voices, say maybe Bob Dylan + Bruce Springsteen.
  • “The ability to emotionally connect with humans (aka presentation skills).
  • “Curiosity. AI will do more than respond. It will be interactive and heuristic, offering paths that have not yet been offered – we have witnessed this AI behavior in the playing of Go and chess. AI will continue to present novel solutions.
  • “A broad and unique worldview. Because AI can be trained on all digitizable human knowledge and can avail itself of information from sensors beyond those available to humans, AI will be able to apply, say, Taoism to questions about weather.
  • “Empathy. Humans do not have an endless well of empathy. We tire easily. But AI can seem persistently and constantly empathetic. You may say that AI empathy isn’t real, but human empathy isn’t always either.
  • “Situational Awareness. Thanks to input from a variety of sensors, AI can and will be able to understand situations even better than humans.
  • “No area of knowledge work will be unaffected by AI and sensor awareness.

“How will we greet our robot masters? With fear, awe, admiration, envy and desire.”

Marjory Blumenthal: Technology outpaces our responses to unintended consequences

Blumenthal, senior adjunct policy researcher at RAND Corporation, wrote, “In a little over a decade, it is reasonable to expect two kinds of progress in particular: First are improvements in the user experience, especially for people with various impairments (visual, auditory, tactile, cognitive). A lot is said about diversity, equity and inclusion that focuses broadly on factors like income and education, but to benefit from digital technology requires an ability to use it that today remains elusive for many people for physiological reasons. Globally, populations are aging, a process that often confronts people with impairments they didn’t use to have (and of course many experience impairments from birth onward).

“Second, and notwithstanding concerns about concentration in many digital-tech markets, more indigenous technology is likely, at least to serve local markets and cultures. In some cases, indigenous tech will take advantage of indigenous data, which technological progress will make easier to amass and use, and more generally it will leverage a wider variety of talent, especially in the Global South, plus motivations to satisfy a wider variety of needs and preferences (including, but not limited to, support for human rights).

“There are two areas in which technology seems to get ahead of people’s ability to deal with it, either as individuals or through governance. One is the information environment. For the last few years, people have been coming to grips with manipulated information and its uses, and it has been easier for people to avoid the marketplace of ideas by sticking with channels that suit narrow points of view.

“Commentators lament the decline in trust of public institutions and speculate about a new normal that questions everything to a degree that is counterproductive. Although technical and policy mechanisms are being explored to contend with these circumstances, the underlying technologies and commercial imperatives seem to drive innovation that continues to outpace responses. For example, the ability to detect tends to lag the ability to generate realistic but false images and sound, although both are advancing.

“At a time when there has been a flowering of principles and ethics surrounding computing, new systems like ChatGPT with a high cool factor are introduced without any apparent thought to second- and third-order effects of using them – thoughtfulness takes time and risks loss of leadership. The resulting distraction and confusion likely will benefit the mischievous more than the rest of us – recognizing that crime and sex have long impelled uses of new technology.

“The second is safety. Decades of experience with digital technology have shown our limitations in dealing with cybersecurity, and the rise of embedded and increasingly automated technology introduces new risks to physical safety even as some of those technologies (e.g., automated vehicles) are touted as long-term improvers of safety.

“Responses are likely to evolve on a sector-by-sector basis, which might make it hard to appreciate interactions among different kinds of technology in different contexts. Although progress on the safety of individual technologies will occur over the next decade, the cumulation of interacting technologies will add complexity that will challenge understanding and response.”

David Porush: Advances may come if there are breakthroughs in quantum computing and the creation of a global court of criminal justice

Porush, author and longtime professor at Rensselaer Polytechnic Institute, wrote, “There will be positive progress in many realms. Quantum computing will become a partner to human creativity and problem solving. We’ve already shown that sophisticated brute-force computing can achieve this with ChatGPT. Quantum computing will surprise us and challenge us to exceed ourselves even further and in much more surprising ways. It will also challenge former expectations about nature and the supernatural, physics and metaphysics. It will rattle the cage of scientific axioms of the mechanist-vitalism duality. This is a belief, and a hope, with only hints in empirical evidence.

“We might establish a new worldwide court of criminal justice. Utopian dreams that the World Wide Web and new social technologies might change human behavior have failed – note the ongoing human criminality, predation, tribalism, hate speech, theft and deception, demagoguery, etc. Nonetheless, social networks also enable us to witness, record and testify to bad behavior almost instantly, no matter where in the world it happens.

“By 2035 I believe this will promote the creation (or beginning of the discussion of the creation) of a new worldwide court of criminal justice, including a means to prosecute and punish individual war crimes and bad nation actors. My hope is that this court would supersede our current broken UN and come to apolitical verdicts based on empirical evidence and universal laws. Citizens pretty universally have shown they will give up rights to privacy to corporations for convenience. It would also imply that the panopticon of technologies used for spying and intrusion, whether for profit or totalitarian control by governments, will be converted to serve global good.

“Social networking contributes to scientific progress, especially in the field of virology. The global reaction to the arrival of COVID-19 showed the power of data gathering, data sharing and collaboration on analysis to combat a pandemic. Worldwide virology over the past two years is a fine avatar of what could be done for all sciences. We can make more effective use of global computing in regard to resource distribution. Politicians and nations have not shown enough political will to really address long-term solutions to crises like global warming, water shortages and hunger. At least emerging data on these crises arm us with knowledge as the predicate to solutions. For instance, there’s not one less molecule of H2O available on Earth than a billion years ago; it’s just collected, made usable and distributed terribly.

“If we combine the appropriate level of political will with technological solutions (many of which we have in hand), we can distribute scarce resources and monitor harmful human or natural phenomena and address these problems with much more timely and effective solutions.”

Nandi Nobell: New interfaces in the metaverse and virtual reality will extend the human experience

Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, wrote, “Whether physical, digital or somewhere in-between, interfaces to human experiences are all we have and have ever had. The body-mind (consciousness) construct is already fully dependent on naturally evolved interfaces to both our surroundings and our inner lives, which is why designing more intuitive and seamless ways of interacting with all aspects of our human lives is both a natural and relevant step forward – it is crossing our current horizon to experience the next horizon. With this in mind, extended reality (XR), the metaverse and artificial intelligence become increasingly important all the time as there are many evident horizons we are crossing through our current endeavours simply by pursuing any advancement.

“Whether through the blockchain we know of today, or something more useful, user- and environmentally-friendly, and smoother to integrate – allowing simple instant contracts and permissionless activities of all sorts – such technology can enable our world to verify the source and quality of content, along with many other benefits.

“The best interfaces to experiences and services that can be achieved will influence what we can think and do, not just as tools and services in everyday life but also as the path to education, communication and so many other things. Improving our interfaces – both physical and digital – makes the difference between having and not having superpowers as we advance.

“Connecting a wide range of technologies that bridge physical and digital possibilities grows the reach of both. This also means that thinking of the human habitat as belonging to all areas that the body and mind can traverse is more useful than inventing new categories and silos by which we classify experiences. Whatever the future version of multifaceted APIs is, they have to be flexible, largely open and easy to use. Connectivity between ways, directions, clarity, etc., of communication can extend the reach and multiplication of any possibilities – new or old.

“Drawbacks and challenges face us in the years ahead. First comes data – if the FAANGs [Facebook/Meta, Amazon, Apple, Netflix, Google] of the world (non-American equivalents are equally bad) are allowed to remain even nearly as powerful as they are today, problems will become ever-greater as their strength as manipulators of individuals grows deeper and more advanced. Manipulation will become vastly more advanced and difficult to recognize.

“Artificial intelligence is already becoming so powerful and versatile it can soon shape any imagery, audio and text or geometry in an instant. This means anyone with the computational resources and some basic tools can trick just about anyone into new thoughts and ideas. The owners of the greatest databanks of individuals’ and companies’ history and preferences can easily shape strategies to manipulate groups, individuals and entire nations into new behaviours.

“Why invest in anything if you will have it stolen at some point? Is some sort of perfect fraud-prevention system (blockchain or better) relevant in a future in which any ownership of any sort of asset class – digital or physical – is under threat of loss or distortion?

“Extended reality and the metaverse often get a bit of a beating for how they can make people more vulnerable to harassment, and this is a real threat, but artificial intelligence is vastly more scalable – essentially it could impact every human with access to digital technology more or less simultaneously, while online harassment in an immersive context is not scalable in a similar sense.

“Striking a comfortable and reasonable balance between safe and sane human freedom and surveillance technologies to keep a legit bottom line of this human safety is going to be hard to achieve. There will be further and deeper abuses in many cultures. This may create a digital world and lifestyle that branches off quite heavily from the non-digital counterparts, as digital lives can be expected to be surveilled while the physical can at least in principle be somewhat free of eavesdropping if people are not in view or earshot of a digital device. This being said, a state or company may still reward behaviour that trades data of all sorts from anything happening offline – which has been the case in dictatorships throughout history. The very use and manufacturing of technology may also cost the planet more than it provides the human experience, and as long as the promises of the future drive the value of stock and investments, we are not likely to understand when to stop advancing on a frontier that is on a roll.

“Health care will likely become both better and worse – the class divide will widen – but long-term it is probably better for most people. The underlying factors generally have more to do with individual human values than with the technologies themselves.”

Be vigilant so no one culture dominates others: Technology Vision 2035 document

Technology-guided cultural practices enrich the existing cultural diversity of the nation and do not replace it, states the document prepared by the country's technology think-tank TIFAC.

Updated - September 22, 2016 10:27 pm IST

Published - January 06, 2016 07:00 pm IST - Mysuru

Prime Minister Narendra Modi addresses the inaugural session of the 103rd Indian Science Congress at University of Mysore in Mysuru on Sunday.


Against the backdrop of “intolerance” debate in the country, the Technology Vision 2035 document released by Prime Minister Narendra Modi says people should be “especially vigilant” that no one culture is able to dominate others.

Stating that diversity in culture and languages is a key defining feature of India, the Technology Vision 2035 prepared by the country’s technology think-tank also said that caution has to be exercised to ensure that technology-guided cultural practices enrich the existing cultural diversity of the nation and do not replace it.

“Diversity in culture and languages are a key defining feature of India. These are at the very core of India’s existence and are its very soul, giving our country its various hues of differences and harmony and making us a vibrant nation,” said the document prepared by Technology Information, Forecasting & Assessment Council (TIFAC), an autonomous organisation under the Department of Science and Technology.

Stating that the vision for India in 2035 cannot be complete without envisaging how this core aspiration-expectation would influence, or be shaped by, the realities of that time, it said that regarding cultural diversity and vibrancy, we would like India to be “as advanced as possible technologically and as rooted as possible culturally”.

“Cultural diversity and vibrancy is one among the twelve prerogatives that should be available to each and every Indian,” said the vision document released by Mr. Modi at the inaugural session of the 103rd Indian Science Congress in Mysuru on Sunday.

It also said ensuring the attainment of these prerogatives is the core of our technology vision for India.

Noting that cultural practices have very strong tendencies to influence us, it said, more often than not these influences are subtle and hidden and this is where the power of cultural practices truly lies.

“We need to be especially vigilant that no one culture is able to dominate others. Ever since the invention of the printing press, the advancement of technology in society has tended to promote monocultures.”

“Caution has to be exercised to ensure that technology guided cultural practices enrich the existing cultural diversity of the nation and do not replace it,” it said.

However, given the right direction, technology could help us in preserving and enhancing the rich cultural diversity of India.

Properly deployed, cultural diversity is a national asset and power multiplier, it added.


Technology Vision 2035 – Putting Science to Use


The Prime Minister unveiled the ‘Technology Vision Document 2035’ while inaugurating the 103rd Indian Science Congress on 3rd January 2016.

The document envisions the Indians of 2035 and the technologies required to fulfil their needs. It is not a visualisation of the technologies that will be available in 2035, but a vision of where our country and its citizens should be in 2035 and of how technology should bring this vision to fruition.

The document is dedicated to the late Dr. A.P.J. Abdul Kalam, former President of India.

The Prime Minister, in his foreword to the document, hoped that the 12 sectoral technology roadmaps being prepared by the Technology Information, Forecasting and Assessment Council (TIFAC), which also authored the ‘Technology Vision 2035’ document, would excite our scientists and decision makers. He also said, “India will be the country of the young for the next few decades. It is imperative that every youth blossoms to his/her full potential and that the potential is fully tapped for the benefit of the nation. This in turn requires that the needs of our children and youth for nutrition, health, knowledge, skill, connectivity and identity are met.” Sh. Narendra Modi called upon the intelligentsia, universities and think tanks to actively work towards fulfilling the vision. After unveiling the document, Sh. Modi said in his speech that his government intends to integrate Science & Technology into the choices it makes and the strategies it pursues.

The 12 identified sectors of the Vision Document are:

• Education
• Medical Sciences & Healthcare
• Food and Agriculture
• Water
• Energy
• Environment
• Habitat
• Transportation
• Infrastructure
• Manufacturing
• Materials
• Information and Communication Technology

The roadmaps, when prepared, will be presented to the Government of India and would guide the further adoption of technologies in those sectors.

The document says that as technology is for empowering individual citizens, it will empower the country as well.

The aim of the ‘Technology Vision Document 2035’ is to ensure the security, enhance the prosperity and enhance the identity of every Indian, which is stated in the document as “Our Aspiration” or “Vision Statement” in all languages of the 8th Schedule of the Constitution. The Vision Document also identifies twelve prerogatives – six meeting individual needs and six meeting collective needs – that should be available to each and every Indian. These are:

Individual Prerogatives:
• Clean air and potable water
• Food and nutritional security
• Universal healthcare and public hygiene
• 24×7 energy
• Decent habitat
• Quality education, livelihood and creative opportunities

Collective Prerogatives:
• Safe and speedy mobility
• Public safety and national security
• Cultural diversity and vibrancy
• Transparent and effective governance
• Disaster and climate resilience
• Eco-friendly conservation of natural resources

Assurance of these prerogatives, according to the Vision Document, is the core of the technology vision for India. For assuring them, technologies are mapped into four categories: 1) those readily deployable, 2) those that need to be moved from lab to field, 3) those that require targeted research, and 4) those that are still in imagination. The last category could come about through curiosity-driven or paradigm-shattering ‘blue-sky’ research, in areas such as the Internet of Things, wearable technology, synthetic biology, brain-computer interfaces, and bio-printing and regenerative medicine. Precision agriculture and robotic farming, vertical farming, interactive foods, autonomous vehicles, bioluminescence, 3D printing of buildings, earthquake prediction, weather-modification technologies and green mining are other such technologies expected to go a long way in sustainably fulfilling the needs of present and future generations.

To illustrate such mapping, the document provides a table categorising the various technologies for meeting the need for ‘Clean Air and Potable Water’.

The vision document also mentions three critical essential prerequisites, or Transversal Technologies – materials, manufacturing, and Information and Communication Technology (ICT) – which provide the foundation upon which all other technologies would be constructed.

The document also discusses the required infrastructure, which it says primarily includes relevant knowledge institutions besides ports, highways, airports, railways, cold chains, etc. Among the essential prerequisites, it also mentions fundamental research in physics, chemistry, biology and other allied sciences.

The document dwells upon the grand technological challenges which, it says, we should resolve as a nation. The challenges are:

• Guaranteeing nutritional security and eliminating female and child anaemia
• Ensuring quantity and quality of water in all rivers and aquatic bodies
• Providing learner-centric, language-neutral and holistic education to all
• Developing commercially viable decentralized and distributed energy for all
• Making India non-fossil fuel based
• Securing critical resources commensurate with the size of our country
• Ensuring universal eco-friendly waste management
• Taking the railway to Leh and Tawang
• Understanding national climate patterns and adapting to them
• Ensuring location-independent electoral and financial empowerment

There has also been a raging debate on the social impact of technology and the choice between capital-intensive and manpower-intensive approaches. Capital-intensive technology, especially in India with its abundant human resources, has been projected as detrimental to employment, as it is argued that it would reduce jobs. The Vision Document seeks to bust this myth by arguing in favour of judicious policy and conscious planning in employing technology to impart new skills to the workforce and fulfil the needs of society. It visualises technology as a great leveller rather than an enhancer of social stratification.

In order to overcome these challenges, the Vision Document 2035 envisages a rational assessment of the capabilities and constraints of the Indian technological landscape. It categorises technologies into a six-fold classification from an Indian perspective, as follows:
• Technology Leadership – niche technologies in which we have core competencies, skilled manpower, infrastructure and a traditional knowledge base, e.g. nuclear energy, space science.
• Technology Independence – strategic technologies that we would have to develop on our own, as they may not be obtainable from elsewhere, e.g. the defence sector.
• Technology Innovation – linking disparate technologies together, or making a breakthrough in one technology and applying it to another, e.g. solar cells patterned on a chlorophyll-based synthetic pathway as a potent future source of renewable energy.
• Technology Adoption – obtaining technologies from elsewhere, modifying them according to local needs and reducing dependence on other sources, e.g. foreign collaboration in rainwater harvesting, agri-biotech, desalination and energy-efficient buildings.
• Technology Constraints – areas where technology is threatening and problematic, i.e. having a negative social or environmental impact because of serious legal and ethical issues, e.g. genetically modified (GM) crops.

The Vision Document, in a separate section, gives a ‘Call to Action’ to all key stakeholders. It notes that for the long-term sustainability of India’s technological prowess, it is important that:
• technical education institutions engage in advanced research on a large scale, leading to path-breaking innovations;
• the Government enhances its financial support from the current 1% to the long-envisaged 2% of GDP;
• the number of full-time-equivalent scientists in the core research sector increases;
• the private sector participates and invests in evolving technologies that are readily deployable and translatable from lab to field, thereby increasing efficiency in terms of technology and economic returns;
• an academia-intelligentsia-industry connect is established through idea exchange, innovative curricula designed around the needs of industry, and industry-sponsored student internships and research fellowships, inter alia;
• a research ecosystem is created to translate research into technology products and processes by integrating students, researchers and entrepreneurs.

The document also identifies three key activities as part of the ‘Call to Action’. The first is knowledge creation: India, it says, cannot afford not to be at the forefront of the knowledge revolution, whether applied or pure. The second is ecosystem design for innovation and development; interestingly, the document says the primary responsibility for ecosystem design must necessarily rest with government authorities. The third is technology deployment, through the launch of national missions with specific targets and defined timelines, requiring only a few carefully identified players.

While this Vision Document looks towards the future for the country as a whole, the technology roadmap of each sector would outline future technology trends, R&D directions, pointers for research, anticipated challenges and policy imperatives pertaining to that sector.

***********
*Sh. K. Syama Prasad is Addl. Director General, PIB, New Delhi.
**Mr. Virat Majboor is Asst. Director, PIB, New Delhi.

courtesy PIB

