ChatGPT: a legitimate work tool or a disciplinary offence?

By attorney Hagar Geyari Bezalel.

This article was originally published in Hebrew and translated into English by Ronnie Nadri, partly with the help of AI tools.

Artificial intelligence is part of our lives, and it is also present in the workplace. Employers must decide whether to ban its use or take advantage of its benefits.

Artificial intelligence (such as ChatGPT) has recently entered our lives and become an integral part of daily life for many of us. Given the range of possible uses, it has become a readily available tool in various fields, from improvising a dinner recipe to completing academic assignments. Naturally, artificial intelligence is also present in the workplace.

The labour market is no stranger to technological change. The automation of various systems, the ability to work remotely, quick access to information and more have served both employers and employees, allowing employers to save time and money and allowing employees to specialize in different roles. Technological changes have often been met at first with suspicion and, over time, become an integral part of the labour market.

Is artificial intelligence simply the next technological step in the labour market, or does it reflect a change in the rules of the game, one that may cause more harm than good?

It is easy to think of the benefits of artificial intelligence in the workplace. It can shorten work processes, open new lines of thought, help improve writing, allow quick access to many sources of knowledge, and more. Tasks that would require many working hours can be carried out with artificial intelligence in a few minutes.

From this point of view, it seems a legitimate and even desirable tool. Just as the computer, the Internet and search engines changed work and made it more efficient, artificial intelligence can also contribute its part, perhaps in an even more advanced way.

However, it cannot be ignored that this is not just another technological tool. In professions where it is essential to develop thought and creativity, to find solutions and breakthrough ideas, the use of artificial intelligence may erode employees' independent and original thinking and encourage laziness.

It is also important to remember that, at this stage, the reliability of information obtained through artificial intelligence has not yet been established, so automatic reliance on its products may lead to errors. In the current reality, it seems that artificial intelligence is here to stay and will be part of the employees' toolbox. The ball is therefore in the employers' court: they must define the limits of its use.

One of the obvious questions is whether a duty of disclosure should be imposed on an employee who uses artificial intelligence as part of his work. Does an employee who presents a product obtained using artificial intelligence as the result of his own labour breach the employer's trust?

Consider a case in which an employer asks an employee to prepare a professional presentation for a client. We will examine two scenarios: in the first, the employee finds a presentation on the Internet that fits his needs exactly and copies it; in the second, the employee asks the AI to prepare the presentation. Seemingly, in both cases the employee did not perform the work himself and did not disclose this to the employer. The first case, in which the employee copies a third party's presentation, can be seen as a breach of trust or even a disciplinary offence, but the second case is more complex.

Assuming that there was no express prohibition by the employer on using artificial intelligence or a policy requiring due disclosure, it is not necessarily possible to establish grounds for a breach of trust or a disciplinary offence.

The difficulty becomes even more acute where the employee does not rely exclusively on artificial intelligence but uses it only as an auxiliary tool. After all, what is the difference between improving the wording of an email in English using artificial intelligence and performing the same operation with translation software? And what if the employee did not formulate the full message but only fed the main points to the artificial intelligence?

Therefore, the line between legitimate use and a disciplinary offence, and the limits of the duty of disclosure, are not unambiguous.

Another question arises: Who is responsible for the product’s reliability – the employee who used the artificial intelligence or the employer who made it possible?

A New York lawyer submitted to the court legal research prepared using artificial intelligence. The research included references to legal proceedings, and even quotes from them, which in retrospect turned out not to exist at all but to have been invented by the artificial intelligence.

In this case, where it was possible to quickly check the reliability of the information by comparing it against legal databases, it is easier to place the responsibility on the employee, who did not perform even a minimal review. But what about cases where checking the reliability of the information is more complex? What is the extent of the duty of review and supervision that can be imposed on the employee?

An employer who wants to enjoy the benefits of artificial intelligence in the workplace but, on the other hand, imposes too extensive a duty of control on employees may significantly reduce those benefits. If employees are required to invest significant time reviewing and supervising the products they receive, the time saved will disappear. Therefore, it seems that an employer who wishes to take advantage of the benefits will also be required to bear a certain degree of responsibility for the quality of the products and will not necessarily be able to transfer full responsibility to the employee.

Since artificial intelligence seems to be here to stay, it would be wise to establish a policy that defines the limits of its use in the workplace.

An employer may categorically prohibit the use of artificial intelligence, determine that its use for work purposes constitutes a disciplinary offence, and even block access to it from work computers.

However, this type of policy will not necessarily be enforceable, as it may be challenging to prove that artificial intelligence was indeed used. Conversely, an employer who wishes to prevent the use of artificial intelligence but does not establish such a policy will find it difficult to take steps against an employee who made such use in good faith.

On the other hand, an employer may establish a policy that embraces the advantages of artificial intelligence but sets the limits of correct and appropriate use. Such a policy can include provisions such as a disclosure obligation; the types of tasks in which use is permitted; an orderly procedure of verification and review; the types of confidential employer information that may not be uploaded to the system; and more.

Since we are in new territory, it is essential to formulate an initial policy that gives certainty to both employees and employers. In a world where technological developments change rapidly, employers must keep pace and adapt work patterns to technological progress.

It seems that artificial intelligence itself recognizes its limits. We asked ChatGPT for its position on the matter. It replied that it is a “legitimate work tool in a variety of contexts”, but that “it is intended to expand human work and not replace it”, since “it sometimes produces incorrect responses” and therefore “it is essential to test and verify the product”.
