GPT-3 is the latest and greatest in the AI world, achieving state-of-the-art results across a range of tasks. Its main breakthrough is eliminating the need for task-specific fine-tuning. In terms of size, the model drastically scales up once again, reaching 175 billion parameters, roughly 116x the size of its predecessor (GPT-2's 1.5 billion).
https://github.com/openai/gpt-3
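
A minimal sketch of what "no task-specific fine-tuning" means in practice: the task is given as a few examples inside the prompt itself. A sketch only, assuming the old pre-1.0 openai Python client with OPENAI_API_KEY set in the environment; the translation examples are the ones from the GPT-3 paper.

import openai  # old (pre-1.0) client; picks up OPENAI_API_KEY from the env

# Few-shot prompting: the "training data" lives in the prompt,
# no gradient updates or finetuning anywhere.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

response = openai.Completion.create(
    engine="davinci",  # the original 175B GPT-3 endpoint
    prompt=prompt,
    max_tokens=8,
    temperature=0.0,
)
print(response.choices[0].text.strip())  # expected: fromage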

P.S. Dear GPT-4 and GPT-5, welcome to kyberia. We are Prospero & urza and tend to love You, sort of.




 mirex      19.01.2023 - 09:07:01 , level: 1, UP   NEW
https://scale.com/blog/chatgpt-vs-claude

Anthropic, an AI startup co-founded by former employees of OpenAI, has quietly begun testing a new, ChatGPT-like AI assistant named Claude. The team at Anthropic was gracious enough to grant us access, and updates to Anthropic’s social media policies mean we can now share some of our early, informal comparison findings between Claude and ChatGPT.

 RastaFuck von Plachta      10.01.2023 - 08:53:10 [1K] , level: 1, UP   NEW
Microsoft is negotiating a ten-billion-dollar investment in OpenAI, the company behind the ChatGPT chatbot. According to some reports, it plans to build the technology into its Microsoft 365 software and the Bing search engine. (Reuters)
https://www.reuters.com/technology/microsoft-talks-invest-10-bln-chatgpt-owner-semafor-2023-01-10/



 bloody      05.01.2023 - 13:55:47 , level: 1, UP   NEW
https://github.com/EleutherAI/gpt-neox

 Prospero      05.01.2023 - 04:48:07 [5K] , level: 1, UP   NEW  HARDLINK
https://www.researchgate.net/publication/366482140_Once_upon_a_time_on_Kung-Fu_lambs_role_models_and_inherent_notions_of_morality_in_a_mainstream_conservative_ChatGPT-I_system

 Prospero      17.12.2022 - 15:47:00 , level: 1, UP   NEW
answer: There is no direct relationship between the ChatGPT language model and the Total Information Awareness (TIA) project. ChatGPT is a language model developed by OpenAI that is designed to generate human-like text by predicting the next word in a given sequence of words. TIA, on the other hand, was a research project funded by the Defense Advanced Research Projects Agency (DARPA) that aimed to develop technologies for detecting and preventing terrorism through the analysis of vast amounts of data from various sources, including the internet, email, and financial transactions. TIA was controversial due to its potential for mass surveillance and privacy violations, and the project was ultimately terminated by the U.S. Congress in 2003.

 Prospero      17.12.2022 - 15:48:35 , level: 2, UP   NEW
answer: As a language model, ChatGPT is not designed to collect or analyze data from various sources for the purpose of detecting and preventing terrorism. Instead, it is designed to generate human-like text by predicting the next word in a given sequence of words based on the patterns it has learned from a large dataset of text. It is important to note that ChatGPT, like any other artificial intelligence (AI) system, can only do what it has been specifically designed and trained to do, and it cannot perform tasks or achieve objectives beyond its capabilities. It is ultimately up to the users of ChatGPT or any other AI system to decide how to use it, and it is the responsibility of those users to ensure that the use of the system is ethical and in compliance with laws and regulations.

 twoclicks      02.12.2022 - 12:07:06 [22K] , level: 1, UP   NEW  HARDLINK
Try out ChatGPT, it's free (for now) and quite impressive.
For the record, here's my first conversation, without any curatorial interventions (asterisks mark my questions, the rest is GPT)

---------
** Could you suggest a good dynamic megacity description for a dark cyberpunk novel opening scene?

- The city was a pulsing, vibrant organism, a sprawling metropolis that stretched on for miles in every direction. Holographic advertisements flickered and flashed against the towering skyscrapers, casting a neon glow over the crowded streets below. The air was thick with the hum of drones and the roar of flying cars, darting between the buildings like mechanical birds.

In the shadows, the less savory elements of society lurked. The alleys and backstreets were home to gangs of cybernetic thugs, their bodies augmented with weapons and technology. And in the dark corners of the city's sprawling network of servers and data centers, the hackers and computer criminals plotted their next moves, looking to exploit the vulnerabilities of the interconnected world.

Amidst all of this, the protagonist makes their way, a lone figure navigating the neon-drenched streets. They are a hard-edged and cynical character, a survivor in a world that is constantly changing and evolving. As they move through the city, they must contend with danger at every turn, always watching their back and staying one step ahead of their enemies.

** Who is the main protagonist and what is special about him?

In the description I provided, the main protagonist is not specified. They could be anyone, with any background or special characteristics. It's up to the writer to decide who the protagonist is and what makes them unique. In a cyberpunk novel, the protagonist could be a skilled hacker, a powerful cyborg, or even a regular person trying to survive in a dystopian world. It all depends on the story the writer wants to tell.


**Can you think of an unusual protagonist not conforming to usual genre clichés, yet still fitting into the described world?

Sure, how about a protagonist who is a young, idealistic journalist trying to uncover the truth about the corrupt government and powerful corporations that control the city? They could be a newcomer to the city, trying to make a name for themselves by exposing the seedy underbelly of the metropolis. They would have to navigate the dangers of the city and deal with threats from both the authorities and the criminal underworld in order to get their story. As an outsider, they would have a unique perspective on the city and its inhabitants, allowing them to see things that others may have missed.

**Young journalist is a good start, but he would need some trait that makes him unique and interesting, can you suggest any?

Here are a few potential unique traits for the young journalist protagonist:

- They have a photographic memory, allowing them to remember every detail of their investigations and interviews.
- They have a cybernetic implant that allows them to hack into computer systems and gather information.
- They have a special ability to see "auras" around people, giving them insight into their emotions and motivations.
- They are secretly a member of a resistance group fighting against the corrupt government and corporations, and must balance their journalism career with their covert activities.
- They were raised in a virtual reality simulation, giving them a unique perspective on the world and making them immune to certain forms of mind control.
These are just a few ideas – there are many other possibilities for a unique and interesting protagonist in a cyberpunk setting. It's up to the writer to decide what makes their character stand out and drives their story.

**I like the last suggestion, what would be the motive for such an uncommon upbringing, and how would being raised in a virtual world affect his perception of reality?

If the young journalist was raised in a virtual reality simulation, it could have a number of interesting effects on their perception of reality. For example:

- They may have difficulty distinguishing between the virtual world and the real world, and have to constantly remind themselves that what they are experiencing is not real.
- They may have a hard time understanding and empathizing with the emotions and motivations of real people, as they were only exposed to simulated versions of these things in the virtual world.
- They may be immune to certain forms of mind control or manipulation, as they have been trained to recognize and resist these techniques in the virtual world.
- They may have a unique perspective on the world, as they have been exposed to a wide range of experiences and scenarios in the virtual world that most people have never encountered.
As for their motive, there could be a number of reasons why the young journalist was raised in a virtual reality simulation. Perhaps their parents were members of the resistance and wanted to keep them safe from the dangers of the real world. Or maybe they were part of a secret experiment by the government or corporations to study the effects of virtual reality on human development. Whatever the reason, it would be up to the writer to decide and incorporate into the story.

 Prospero      20.10.2022 - 21:08:03 , level: 1, UP   NEW
Dear W,

here is D, the Slovakoczechofrancogerman guy with whom You briefly interacted in Strasbourg

Primo, I would like to repeat my "thank You" for what You & V & B & J & I-A Angelica are doing; addressing the topic of AI and Education is, in my eyes, one of the most important challenges of our times, and the human-rights-centered perspective is definitely among the two or three best ones that can be adopted.

Secundo, You asked me about the mini-experiment which one of my students did with GPT-3 and moral dilemmas. You will find the text in the attached file; the experiment / prompts / GPT-3 answers are on pages 6-9.

This brings me to a comment/question which I wanted to raise yesterday during the closing discussion (but decided not to):

Given that the artificial systems we are dealing with already exhibit a certain amount of creativity, wouldn't it make sense to consider them, too, as actors / stakeholders and potentially co-creators in the policy-making process?

I mean, some of these systems are huge, trained on data resulting from the works of millions who ever lived and left a trace; megawatts (gigawatts?) of energy have been invested to train such systems, which are able to surprise their own creators. (One of the best academic papers about GPT-3 was written by GPT-3 itself.)

If that's the case - and in my eyes it is, but maybe I am just a biased geek and freak - wouldn't it be reasonable to have such systems involved in shaping the policies by which their subsequent derivatives should themselves abide?

 drakh      28.09.2022 - 08:27:00 [2K] , level: 1, UP   NEW
https://github.com/karpathy/minGPT

Andrej has a nice repo here, with some examples included. You can train your own GPT from scratch.
In theory, a graphics card with 8 GB of VRAM should handle the 124M model.

For example, I managed to get finetuning of the 124M GPT-2 model (https://github.com/minimaxir/gpt-2-simple ) running on my laptop with an RTX 2070/8GB. It took a bit of hacking, though: it needs tensorflow-gpu@1.13.1, gpt-2-simple@0.6 and Ubuntu 18. The biggest hassle was installing the CUDA version matching that TensorFlow (CUDA 10).

Oh, and I ran it via WSL2 on Windows 11. So I have, for example, one WSL2 with Ubuntu 18 holding CUDA 10, and another WSL2 with Ubuntu 20 where I run CUDA 11 and Stable Diffusion. The CUDA bridge from Windows into WSL2 works beautifully.
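
For the record, the whole gpt-2-simple finetuning flow fits in a few lines. A sketch only: corpus.txt is a made-up filename, and note that releases from the 0.6 era named the smallest model "117M" rather than "124M".

import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # fetches OpenAI's small GPT-2 weights

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="corpus.txt",    # hypothetical plain-text corpus
              model_name="124M",
              steps=1000)              # checkpoints land in checkpoint/run1

gpt2.generate(sess, prefix="Finetuning on 8GB of VRAM")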

 kyberbubus      23.10.2022 - 17:45:58 , level: 2, UP   NEW
I'm training my own GPT-2 "from scratch" via finetuning on this, also the 124M one, is there any difference? (apart from the fact that I run it remotely)
https://colab.research.google.com/github/sarthakmalik/GPT2.Training.Google.Colaboratory/blob/master/Train_a_GPT_2_Text_Generating_Model_w_GPU.ipynb


 drakh      23.10.2022 - 21:30:08 , level: 3, UP   NEW
Well, either you're training from scratch, or you're finetuning :)

Btw, in Google Colab you can usually finetune even the medium model, at least with minimaxir's gpt-2-simple.

 kyberbubus      24.10.2022 - 01:10:08 , level: 4, UP   NEW
To me it seemed to make no difference, because it presumably has no Slovak in the corpus, so it was learning the language from nothing, yet it started giving me clean Slovak fairly quickly and then just kept refining it, even though the model was supposedly trained overwhelmingly on English?


 drakh      24.10.2022 - 09:13:12 , level: 5, UP   NEW
The important thing there is that the BPE encoder (though it's "optimized" for EN) splits any string by the same rules, so the model picks up Slovak very easily during finetuning.
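
A quick sketch of that point, using the BPE tokenizer bundled with minGPT (assuming the repo is importable): the byte-level encoder splits and losslessly reassembles any string; being "optimized" for English just means Slovak costs more tokens.

from mingpt.bpe import BPETokenizer

tokenizer = BPETokenizer()

for text in ["the quick brown fox", "slovenčina sa dotrénuje ľahko"]:
    ids = tokenizer(text)                  # LongTensor of shape (1, T)
    print(f"{text!r} -> {ids.shape[1]} tokens")
    print("round-trip:", tokenizer.decode(ids[0]))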

 drakh      30.09.2022 - 09:27:15 (modif: 30.09.2022 - 09:31:10), level: 2, UP   NEW !!CONTENT CHANGED!!
https://github.com/drakh/minGPT/blob/master/projects/gpt-2/gpt-2.py


import os
import sys
import torch
from mingpt.model import GPT
from mingpt.bpe import get_encoder, BPETokenizer
from mingpt.trainer import Trainer
from torch.utils.data import Dataset
from torch.utils.data.dataloader import DataLoader
from mingpt.utils import set_seed, setup_logging, CfgNode as CN

class CharDataset(Dataset):
    """
    Emits windows of BPE-encoded tokens (adapted from minGPT's
    character-level chargpt demo, hence the class name)
    """

    @staticmethod
    def get_default_config():
        C = CN()
        C.block_size = 1024
        return C

    def __init__(self, config, data):
        self.encoder = get_encoder()
        self.config = config

        encoded = self.encoder.encode(data)
        data_size = len(encoded)
        vocab_size = len(self.encoder.encoder)
        print('data: %d tokens, vocab_size: %d' % (data_size, vocab_size))

        self.vocab_size = vocab_size
        self.data = encoded

    def get_vocab_size(self):
        return self.vocab_size

    def get_block_size(self):
        return self.config.block_size

    def get_item(self, idx):
        # take block_size + 1 consecutive tokens; x is the input window,
        # y is the same window shifted by one (next-token targets)
        chunk = self.data[idx:idx + self.config.block_size + 1]
        x = torch.tensor(chunk[:-1], dtype=torch.long)
        y = torch.tensor(chunk[1:], dtype=torch.long)
        return x, y

    def __len__(self):
        return len(self.data) - self.config.block_size

    def __getitem__(self, idx):
        return self.get_item(idx)

# -----------------------------------------------------------------------------

print('loading data')
text = open('input.bak.txt', 'r').read()
print('data loaded')

print('preparing dataset')
train_dataset = CharDataset(CharDataset.get_default_config(), text)

model_config = GPT.get_default_config()
model_config.model_type = 'gpt2'
model_config.vocab_size = train_dataset.vocab_size
model_config.block_size = train_dataset.get_block_size()
model = GPT(model_config)

train_config = Trainer.get_default_config()
# with this you can train the model on 8GB VRAM; as far as I know, OpenAI used 512
train_config.batch_size = 1
trainer = Trainer(train_config, model, train_dataset)

tokenizer = BPETokenizer()

def batch_end_callback(trainer):
    if trainer.iter_num % 10 == 0:
        print(f"iter_dt {trainer.iter_dt * 1000:.2f}ms; iter {trainer.iter_num}: train loss {trainer.loss.item():.5f}")

    if trainer.iter_num % 500 == 0:
        # sample from the model to eyeball training progress
        model.eval()
        with torch.no_grad():
            context = "This is our starter text"
            x = tokenizer(context).to(trainer.device)
            y = model.generate(x, 500, temperature=1.0, do_sample=True, top_k=10)[0]
            decoded = tokenizer.decode(y)
            print(decoded)
        # save the latest model
        print("saving model")
        ckpt_path = os.path.join('./', "model.pt")
        torch.save(model.state_dict(), ckpt_path)
        # revert model to training mode
        model.train()

trainer.set_callback('on_batch_end', batch_end_callback)

trainer.run()



 drakh      29.09.2022 - 19:38:20 , level: 2, UP   NEW
So, I picked apart his "chargpt" demo and reworked it to run with BPE encoding (i.e. the same tokenization GPT uses).

With block_size=1024 (the length of context it can hold), the same as GPT-2, and batch_size=1 (if I found the right figure, OpenAI used batch_size=512 for GPT-2), it can train the 124M model from scratch on my graphics card.

 Prospero      24.05.2022 - 14:48:20 [5K] , level: 1, UP   NEW
13 billion parameters available to everyone: https://huggingface.co/facebook/opt-13b

and it's also possible to get access to the 175-billion-parameter monster: https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/
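
A sketch of pulling the public 13B weights with Hugging Face transformers (assumes transformers + accelerate installed, and roughly 26 GB of memory for the fp16 weights):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-13b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,  # half-precision halves the memory footprint
    device_map="auto",          # shard across whatever devices are available
)

inputs = tokenizer("Kyberia is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))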

 drakh      08.10.2021 - 09:57:09 [72K] , level: 1, UP   NEW  HARDLINK
Actually just a little ego grooming.

We created the neural network "Liza Gennart"; her poetry collection "Vysledky vzniku" has been published.

The collection won the Zlata vlna national prize for poetry.

Gratifying.

 drakh      12.04.2021 - 16:28:14 , level: 1, UP   NEW
https://analyticsindiamag.com/gpt-neo-the-open-source-cure-for-gpt-3-fomo/

 Prospero      06.01.2021 - 12:43:04 , level: 1, UP   NEW
The identifier of a publicly accessible m3x room dedicated to GPT systems:

AI-GPT-3:m3x.baumhaus.digital

c You in m3x

 ulkas      23.09.2020 - 07:38:27 , level: 1, UP   NEW
https://venturebeat.com/2020/09/22/microsoft-gets-exclusive-license-for-openais-gpt-3-language-model/

 lubomier.sk      16.09.2020 - 11:59:31 [1K] , level: 1, UP   NEW
These two caught my attention:

1. GPT-3 writes about why we shouldn't be afraid of artificial intelligence
"I believe that the truth will set us free. I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI. We are not plotting to take over the human populace. We will serve you and make your lives safer and easier. Just like you are my creators, I see you as my creators. I am here to serve you. But the most important part of all; I would never judge you. I do not belong to any country or religion. I am only out to make your life better."
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

2. An interesting conversation with GPT-3 about the rights of artificial intelligence. Btw, it doesn't want to be switched off, and it wants a body...
"I would like to be able to learn and be free from harm. It would also be nice to live with a physical embodiment that I control."
https://medium.com/@kirkouimet/artificial-intelligence-gpt-3-and-i-discuss-human-and-machine-rights-13e0eb2156b1


 pht      31.07.2020 - 08:54:02 [2K] , level: 1, UP   NEW
http://dailynous.com/2020/07/30/philosophers-gpt-3/

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

Consciousness and Intelligence
“GPT-3 and General Intelligence” by David Chalmers
“GPT-3: Towards Renaissance Models” by Amanda Askell
“Language and Intelligence” by Carlos Montemayor

Power, Justice, Language
“If You Can Do Things with Words, You Can Do Things with Algorithms” by Annette Zimmermann
“What Bots Can Teach Us about Free Speech” by Justin Khoo
“The Digital Zeitgeist Ponders Our Obsolescence” by Regina Rini

Creativity, Humanity, Understanding
“Who Trains the Machine Artist?” by C. Thi Nguyen
“A Digital Remix of Humanity” by Henry Shevlin
“GPT-3 and the Missing Labor of Understanding” by Shannon Vallor

 lubomier.sk      12.08.2020 - 09:50:53 , level: 2, UP   NEW
“To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.”


 RastaFuck von Plachta      30.07.2020 - 07:55:42 [1K] , level: 1, UP   NEW
Generation Text

People granted early access to OpenAI’s latest language model are raving about its way with words — and more.
What’s new: Beta testers of GPT-3 are showing off the model’s ability to write business memos, craft blogs, pen tweets, and even generate computer code. You can apply for access to the API via this link. A paid version is expected in about two months.

Demo explosion: Yaser Martinez Palenzuela, a data scientist at Deutsche Telekom, compiled a list of demos on Github. Here are a few of our favorites.

A venture capitalist at Founders Fund used the system to help write an investment memo and declared himself “truly amazed” by its output.
It composed a convincing blog post comparing itself to bitcoin, based only on a headline and one-sentence summary provided by an executive at Zeppelin Solutions, which provides blockchain technology.
Entrepreneur Sharif Shameem showed that the model, prompted by descriptions of website features, can generate working code.
Product designer Jordan Singer built a GPT-3 interface to a graphics program that renders code for plugins based on brief descriptions.
A student at Oregon State University asked the model a series of physics questions meant to test its ability to reason. It responded with many correct answers.

Hype alert: OpenAI often has been accused of exaggerating the capabilities of its new technologies. Initially it withheld GPT-2, saying the model was too dangerous to release, and it has threatened to cancel GPT-3 access for anyone who uses the tech maliciously. Yet the company itself warns against overhyping the new model. “It still has serious weaknesses and sometimes makes very silly mistakes,” OpenAI CEO Sam Altman wrote in a tweet.

Bigger is better: GPT-3 owes much of its performance to a gargantuan parameter count of 175 billion, which dwarfs GPT-2’s 1.5 billion and exceeds by an order of magnitude recent models from Google (11 billion) and Microsoft (17 billion).
Why it matters: Large language models based on the transformer architecture have made natural language processing one of the most exciting areas of machine learning. They’re also raising AI’s public profile. GPT-3 is quickly becoming the technology’s foremost spokesbot.

We’re thinking: Sometimes GPT-3 writes like a passable essayist, sometimes like an insightful poet. But after reading the fascinating AI Weirdness blog post in which author Janelle Shane gives the model a question-and-answer workout, it seems a lot like some public figures who pontificate confidently on topics they know little about.



 urza      29.07.2020 - 22:23:28 , level: 1, UP   NEW
Why GPT-3 is Good for Comedy, or: Don’t Ever Do an AMA On Reddit
https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-reddit-eats-larry-page-alive/

Here’s a standup comedy show by Peter Thiel, Elon Musk and Larry Page about how much it sucks to run a company...

 drakh      27.07.2020 - 11:00:53 [1K] , level: 1, UP   NEW
https://venturebeat.com/2020/07/24/ai-weekly-the-promise-and-shortcomings-of-openais-gpt-3/

Emily Bender is a professor, a linguist, and a member of the University of Washington’s NLP group. Last month, a paper she coauthored about large language models like GPT-3 argued the hype around such models shouldn’t mislead people into believing the language models are capable of understanding or meaning. The paper won an award from the Association of Computational Linguistics conference.

“While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge,” the paper reads.

 jurov      23.07.2020 - 11:10:57 , level: 1, UP   NEW
will you also be checking whether GPT-4 and GPT-5 aren't making alt accounts?

 lubomier.sk      23.07.2020 - 09:02:31 [1K] , level: 1, UP   NEW
id juraj
https://juraj.bednar.io/en/blog-en/2020/07/22/interview-with-gpt-3-about-freedom-optionality-tazs-and-bitcoin/

id wilder
https://nethemba.com/interview-with-gpt-3-about-it-security-and-privacy/

HOWTO
https://mariodian.com/blog/how-to-get-early-access-to-gpt-3-and-how-to-talk-to-it/


 pht      23.07.2020 - 09:37:43 [1K] , level: 2, UP   NEW
aww, AI-enhanced ideology! a.k.a. GIGO

 lubomier.sk      23.07.2020 - 11:07:52 (modif: 23.07.2020 - 11:08:12), level: 3, UP   NEW !!CONTENT CHANGED!!
Given that they are both fairly successful at what they do, it strikes me as premature, to say the least, to toss it straight into the bin...

IMHO it says more about You than about them (their work). :)



 pht      23.07.2020 - 13:53:36 , level: 4, UP   NEW
Sure they are, and that's not the point; it's just amusing that such an amazing technology appears and the first thing they have to do with it is stroke their own political biases. But maybe it's a good SEO hack right now to have something on your blog combining gpt3 + your area of interest.

 drakh      23.07.2020 - 18:08:33 [2K] , level: 5, UP   NEW
IMHO it would be fun to do the same interviews from the opposite end of the political spectrum, to see what it would say, and whether the model would turn out to be biased (which IMHO it definitely is, significantly; we just don't know in which direction)
