Dear W,

this is D, the Slovakoczechofrancogerman guy with whom You briefly interacted in Strasbourg.

Primo, I would like to repeat my "thank You" for what You & V & B & J & I-A Angelica are doing. Addressing the topic of AI and Education is, in my eyes, one of the most important challenges of our times, and the human-rights-centered perspective is definitely among the two or three best ones which can be adopted.

Secundo, You asked me about the mini-experiment which one of my students did with GPT-3 and moral dilemmas. You will find the text in the attached file; the experiment / prompts / GPT-3 answers are on pages 6-9.

This brings me to a comment/question which I wanted to raise yesterday during the closing discussion (but decided not to):

Given that the artificial systems which we are dealing with already exhibit a certain amount of creativity, wouldn't it make sense to consider them as actors / stakeholders, and potentially co-creators, in the policy-making process?

I mean, some of these systems are huge, trained on data resulting from the works of millions who ever lived and left a trace; megawatts (gigawatts?) of energy have been invested to train such systems, which are able to surprise their own creators. (One of the best academic papers about GPT-3 was written by GPT-3 itself.)

If that's the case - and in my eyes it is, but maybe I am just a biased geek and freak - wouldn't it be reasonable to have such systems involved in the shaping of the policies by which their subsequent derivatives should themselves abide?