April 24, 2023
2 mins read

ChatGPT fails when it comes to accounting

On 11.3 per cent of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing…reports Asian Lite News

AI chatbot ChatGPT is still no match for humans when it comes to accounting. While it is a game changer in several fields, researchers say the AI still has work to do in this realm.

Microsoft-backed OpenAI has launched its newest AI chatbot product, GPT-4, which uses machine learning to generate natural language text. The model passed the bar exam with a score in the 90th percentile, passed 13 of 15 Advanced Placement (AP) exams and got a nearly perfect score on the GRE Verbal test.

“It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at Brigham Young University (BYU) in the US. “Trying to learn solely by using ChatGPT is a fool’s errand.”

Researchers at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. They put the original version, ChatGPT, to the test.

“We’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening,” said lead study author David Wood, a BYU professor of accounting.

Although ChatGPT’s performance was impressive, the students performed better.

Students scored an overall average of 76.7 per cent, compared to ChatGPT’s score of 47.4 per cent.

On 11.3 per cent of questions, ChatGPT scored higher than the student average, doing particularly well on accounting information systems (AIS) and auditing.

But the AI bot did worse on tax, financial, and managerial assessments, possibly because it struggled with the mathematical processes those question types require, said the study published in the journal Issues in Accounting Education.

When it came to question type, ChatGPT did better on true/false questions and multiple-choice questions, but struggled with short-answer questions.

In general, higher-order questions were harder for ChatGPT to answer.

“ChatGPT doesn’t always recognise when it is doing math and makes nonsensical errors such as adding two numbers in a subtraction problem, or dividing numbers incorrectly,” the study found.

ChatGPT often provides explanations for its answers, even if they are incorrect. Other times, ChatGPT’s descriptions are accurate, but it will then proceed to select the wrong multiple-choice answer.

“ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated. The work and sometimes the authors do not even exist,” the findings showed.

That said, the authors fully expect GPT-4 to improve markedly on the accounting questions posed in their study.
