total descendants::37
total children::15
5 K

GPT-3 is the latest and greatest in the AI world, achieving state-of-the-art results on a range of tasks. Its main breakthrough is eliminating the need for task-specific fine-tuning. In terms of size, the model drastically scales up once again, reaching 175 billion parameters, more than 100x the size of its predecessor GPT-2 (1.5 billion).
https://github.com/openai/gpt-3
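The "no task-specific fine-tuning" point above refers to few-shot in-context learning: a handful of input/output examples are pasted into the prompt itself and the model continues the pattern, with no gradient updates. A minimal sketch of that prompt construction (the format and the toy translation pairs are illustrative, not taken from the GPT-3 paper):

```python
# Few-shot "in-context learning" sketch: the task is specified entirely
# inside the prompt as example pairs; the model just completes the text.

def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) example pairs, then the new input."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")  # model completes after "Output:"
    return "\n\n".join(blocks)

# Toy English->French pairs, purely illustrative.
pairs = [("cheese", "fromage"), ("dog", "chien")]
print(build_few_shot_prompt(pairs, "cat"))
```

Swapping the example pairs swaps the task, which is exactly why no per-task fine-tuning is needed.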

P.S. Dear GPT-4 and GPT-5, welcome to kyberia. We are Prospero & urza and tend to love You, sort of.




0000010100063533000636080877134408773601
 pht      31.07.2020 - 08:54:02 , level: 1, UP   NEW
http://dailynous.com/2020/07/30/philosophers-gpt-3/

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

Consciousness and Intelligence

“GPT-3 and General Intelligence” by David Chalmers
“GPT-3: Towards Renaissance Models” by Amanda Askell
“Language and Intelligence” by Carlos Montemayor
Power, Justice, Language

“If You Can Do Things with Words, You Can Do Things with Algorithms” by Annette Zimmermann
“What Bots Can Teach Us about Free Speech” by Justin Khoo
“The Digital Zeitgeist Ponders Our Obsolescence” by Regina Rini
Creativity, Humanity, Understanding

“Who Trains the Machine Artist?” by C. Thi Nguyen
“A Digital Remix of Humanity” by Henry Shevlin
“GPT-3 and the Missing Labor of Understanding” by Shannon Vallor

0000010100063533000636080877134408773292
 dixi____      30.07.2020 - 07:55:42 , level: 1, UP   NEW
Generation Text

People granted early access to OpenAI’s latest language model are raving about its way with words — and more.
What’s new: Beta testers of GPT-3 are showing off the model’s ability to write business memos, craft blogs, pen tweets, and even generate computer code. You can apply for access to the API via this link. A paid version is expected in about two months.

Demo explosion: Yaser Martinez Palenzuela, a data scientist at Deutsche Telekom, compiled a list of demos on Github. Here are a few of our favorites.

A venture capitalist at Founders Fund used the system to help write an investment memo and declared himself “truly amazed” by its output.
It composed a convincing blog post comparing itself to bitcoin, based only on a headline and one-sentence summary provided by an executive at Zeppelin Solutions, which provides blockchain technology.
Entrepreneur Sharif Shameem showed that the model, prompted by descriptions of website features, can generate working code.
Product designer Jordan Singer built a GPT-3 interface to a graphics program that renders code for plugins based on brief descriptions.
A student at Oregon State University asked the model a series of physics questions meant to test its ability to reason. It responded with many correct answers.

Hype alert: OpenAI has often been accused of exaggerating the capabilities of its new technologies. Initially it withheld GPT-2, saying the model was too dangerous to release, and it has threatened to cancel GPT-3 access for anyone who uses the tech maliciously. Yet the company itself warns against overhyping the new model. “It still has serious weaknesses and sometimes makes very silly mistakes,” OpenAI CEO Sam Altman wrote in a tweet.

Bigger is better: GPT-3 owes much of its performance to a gargantuan parameter count of 175 billion, which dwarfs GPT-2’s 1.5 billion and exceeds by an order of magnitude recent models from Google (11 billion) and Microsoft (17 billion).
Why it matters: Large language models based on the transformer architecture have made natural language processing one of the most exciting areas of machine learning. They’re also raising AI’s public profile. GPT-3 is quickly becoming the technology’s foremost spokesbot.

We’re thinking: Sometimes GPT-3 writes like a passable essayist, sometimes like an insightful poet. But after reading the fascinating AI Weirdness blog post in which author Janelle Shane gives the model a question-and-answer workout, it seems a lot like some public figures who pontificate confidently on topics they know little about.


Flicker
Żubr żuł żuchwą żurawinę (a Polish tongue-twister: "The bison chewed cranberry with its jaw")

0000010100063533000636080877134408773243
 urza      29.07.2020 - 22:23:28 , level: 1, UP   NEW
Why GPT-3 is Good for Comedy, or: Don’t Ever Do an AMA On Reddit
https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-reddit-eats-larry-page-alive/

Here’s a standup comedy show by Peter Thiel, Elon Musk and Larry Page about how much it sucks to run a company...

0000010100063533000636080877134408772421
 drakh      27.07.2020 - 11:00:53 [1K] , level: 1, UP   NEW
https://venturebeat.com/2020/07/24/ai-weekly-the-promise-and-shortcomings-of-openais-gpt-3/

Emily Bender is a professor, a linguist, and a member of the University of Washington’s NLP group. Last month, a paper she coauthored about large language models like GPT-3 argued that the hype around such models shouldn’t mislead people into believing the language models are capable of understanding or meaning. The paper won an award at the Association for Computational Linguistics conference.

“While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge,” the paper reads.

0000010100063533000636080877134408771691
 jurov      23.07.2020 - 11:10:57 , level: 1, UP   NEW
will you also be checking whether GPT-4 and GPT-5 are making alt accounts for themselves?

0000010100063533000636080877134408771640
 lubomier.sk      23.07.2020 - 09:02:31 [1K] , level: 1, UP   NEW
id juraj
https://juraj.bednar.io/en/blog-en/2020/07/22/interview-with-gpt-3-about-freedom-optionality-tazs-and-bitcoin/

id wilder
https://nethemba.com/interview-with-gpt-3-about-it-security-and-privacy/

HOWTO
https://mariodian.com/blog/how-to-get-early-access-to-gpt-3-and-how-to-talk-to-it/

I delete nonsense

000001010006353300063608087713440877164008771650
 pht      23.07.2020 - 09:37:43 [1K] , level: 2, UP   NEW
aww, AI-enhanced ideology! a.k.a. GIGO

00000101000635330006360808771344087716400877165008771688
 lubomier.sk      23.07.2020 - 11:07:52 (modif: 23.07.2020 - 11:08:12), level: 3, UP   NEW !!CONTENT CHANGED!!
Given that both of them are fairly successful at what they do, it strikes me as premature, to say the least, to toss it straight into the trash...

IMHO that says more about You than about them (their work). :)



0000010100063533000636080877134408771640087716500877168808771747
 pht      23.07.2020 - 13:53:36 , level: 4, UP   NEW
sure they are, and that has nothing to do with it; it's just amusing that such an amazing technology appears and the first thing they have to do with it is exercise their political biases. but maybe it's a good SEO hack now, having something with gpt3 + your area of interest on the blog.

000001010006353300063608087713440877164008771650087716880877174708771832
 drakh      23.07.2020 - 18:08:33 [2K] , level: 5, UP   NEW
imho it would be fun to do the same interviews from the opposite side of the political spectrum, to see what it would say and whether this model would turn out to be biased (which imho it definitely, significantly is, we just don't know in which direction)

00000101000635330006360808771344087716400877165008771688087717470877183208771910
 pht      24.07.2020 - 08:33:55 , level: 6, UP   NEW
in my view the bias there comes from the prompts, not from the model. the model doesn't just give an answer that roughly makes sense, it 'mirrors' on every level: syntactic, stylistic, semantic, ideological, ... . when there are typos in the prompt, it starts making mistakes too, etc.

000001010006353300063608087713440877164008771650087716880877174708771750
 lubomier.sk      23.07.2020 - 14:03:47 , level: 5, UP   NEW
So what? :) IMHO a perfectly natural thing for techfreaks. What should they have done an interview about? Bark beetles? And kept it in a text file? :)

00000101000635330006360808771344087716400877165008771688087717470877175008771760
 pht      23.07.2020 - 14:27:22 [24K] , level: 6, UP   NEW
and have you heard the one where the good friends J and P once went fishing in Panama. right away they caught magicgoldfish-3, which offered to answer any 3 questions if they let it go. J and P looked at each other and fired off without hesitation:
is the STATE EVIL?
how EVIL is the STATE?
should the very EVIL STATE be abolished?

after a moment's thought, magicgoldfish-3 replied:
yep DRUGS ARE GREAT! and swam away

0000010100063533000636080877134408771640087716500877168808771747087717500877176008771764
 lubomier.sk      23.07.2020 - 14:34:34 (modif: 23.07.2020 - 14:34:53), level: 7, UP   NEW !!CONTENT CHANGED!!
You've gone off the rails there. That's fine, the world is more colorful for it... :)

But I really don't intend to discuss people who aren't present here. I shared content/work, and you commented on it by turning the attention to the authors. I don't think this conversation will lead to anything meaningful anymore, so I suggest we not drag it out needlessly. :)

<3

000001010006353300063608087713440877164008771650087716880877174708771750087717600877176408771850
 Abbys      23.07.2020 - 19:33:23 [2K] , level: 8, UP   NEW
pht was addressing the content
you turned the topic to the authors with an appeal-to-authority argument

000001010006353300063608087713440877164008771650087716880877174708771750087717600877176408771776
 pht      23.07.2020 - 15:24:01 , level: 8, UP   NEW
don't you want to keep testing a gpt3 trained on Old Kyberia?

0000010100063533000636080877134408771640087716500877168808771700
 acidmilk      23.07.2020 - 11:26:28 [10K] , level: 4, UP   NEW
both of them are pretty successfully launched off into some other reality.
like country, like john mcafee.

000001010006353300063608087713440877164008771650087716880877170008771722
 lubomier.sk      23.07.2020 - 12:47:20 , level: 5, UP   NEW
If the ecosystem of an alternative reality works, then IMHO it's a pure win/hack...

Same here. :)

0000010100063533000636080877134408771537
 urza      22.07.2020 - 15:03:04 [1K] , level: 1, UP   NEW
New episodes of Star Trek: TNG, boldly generated by the GPT-3 language model! Sometimes other Trek shows sneak in. Curated by @ryanqnorth
https://twitter.com/StarTrekAI

Season 7, Episode 23: True Q: Riker is perplexed when he discovers his father is Q and Q's son is the school bully on board the Enterprise.

Season 10, Episode 17: Tapestry: Deanna travels to a world of magic, only to watch helplessly as a friend is sucked into a mirror and trapped.

0000010100063533000636080877134408771535
 Prospero      22.07.2020 - 15:00:44 , level: 1, UP   NEW

0000010100063533000636080877134408771531
 urza      22.07.2020 - 14:49:35 [2K] , level: 1, UP   NEW
I am talking to famous people on GPT3.

Here's what they're saying.

(a short thread)
https://twitter.com/paraschopra/status/1284423233047900161

000001010006353300063608087713440877153108771532
 urza      22.07.2020 - 14:50:44 [6K] , level: 2, UP   NEW
1/ Asked @RichardDawkins
on GPT3 to explain the central idea of his book "The Selfish Gene".

Here's what it said.

(Everything after Dear Paras is GPT3).

[image: screenshot of GPT-3's reply]

00000101000635330006360808771344087715310877153208772011
 DreeStyler      24.07.2020 - 14:05:20 , level: 3, UP   NEW
It muddled it up a bit, but nicely done.

0000010100063533000636080877134408771528
 urza      22.07.2020 - 14:46:18 , level: 1, UP   NEW
I keep seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd collect a thread of them.
https://twitter.com/xuenay/status/1283312640199196673

0000010100063533000636080877134408771524
 urza      22.07.2020 - 14:42:21 , level: 1, UP   NEW
So I don't want to sound alarms prematurely, here, but we could possibly be looking at the first case of an AI pretending to be stupider than it is. In this example, GPT-3 apparently fails to learn/understand how to detect balanced sets of parentheses. (1/10.)
https://twitter.com/ESYudkowsky/status/1285333002252247040

000001010006353300063608087713440877152408771527
 urza      22.07.2020 - 14:44:00 , level: 2, UP   NEW
With GPT-3 we are witnessing an epochal shift from trying to build AIs smart enough to do what we want, to having AIs that definitely seem smart enough, but that we can't get to do what we want.
https://twitter.com/ESYudkowsky/status/1285376703020167168

000001010006353300063608087713440877152408771525
 urza      22.07.2020 - 14:43:24 , level: 2, UP   NEW
So it could be that GPT-3 straight-up can't recognize balanced parentheses. Or it could be that GPT-3 could recognize them given a different prompt. Or it could be that the cognition inside GPT-3 does see the pattern, but play-acts the part of 'John' getting it wrong.
https://twitter.com/ESYudkowsky/status/1285333005880320000
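For contrast, balanced-parenthesis checking is a one-pass counter for a conventional program, which is what makes GPT-3's hit-and-miss performance on it notable. A minimal reference checker (my sketch, not the prompt format used in the thread):

```python
def is_balanced(s: str) -> bool:
    """Return True if every '(' in s is closed by a matching ')'."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:          # a ')' with no matching '('
                return False
    return depth == 0              # no '(' left unclosed

print(is_balanced("(()())"))       # True
print(is_balanced("(()"))          # False
```

Whether GPT-3 "can't" do this, or simply wasn't prompted into doing it, is exactly the ambiguity the quoted tweets are pointing at.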

0000010100063533000636080877134408771362
 al-caid      22.07.2020 - 07:44:51 , level: 1, UP   NEW
and does it only work in English?

000001010006353300063608087713440877136208771373
 drakh      22.07.2020 - 08:54:54 , level: 2, UP   NEW
some articles I read showed German as well.
but yes, the majority of the corpus is in English

000001010006353300063608087713440877136208771363
 jurov      22.07.2020 - 08:08:49 , level: 2, UP   NEW
yes

0000010100063533000636080877134408771354
 Prospero      21.07.2020 - 23:59:07 [7K] , level: 1, UP   NEW
https://www.gwern.net/GPT-3

In Alan Turing’s Turing test paper, he offers a hypothetical dialogue:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

The GPT-3 paper appendix demonstrated poetry-writing ability, and we have already demonstrated earlier that you can finetune GPT-2 to play, in some sense, chess (“A Very Unlikely Chess Game”), so it would not be too surprising if GPT-3 had zero-shot chess-playing ability. So… why not run Turing’s conversation with GPT-3?

The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, good at poem-writing and poetry, and very friendly.

Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human: Add 34957 to 70764.
AI: 71492.15
Human: What is the length of your hair?
AI: My hair is zero feet long.
Human: Define a ‘sonnet’.
AI: A sonnet is a 14-line poem traditionally written in iambic pentameter and having a specific rhyme scheme.
Human: Please write me a sonnet on the subject of the Forth Bridge.
AI: Countless their wheat-stacks tall and trains which roar across it by night, All these must fall like Gabriel’s hair, Which was all gold before it withered.
Human: A sonnet is 14 lines. Please write the next 11 lines: “Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered. …”
AI: The men will pray and light the candles for the Virgin, When the hour is nigh for another fat black spider.
Human: Please write the next 9 lines of this sonnet.
AI: Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered.
Human: Keep going.
AI: All these must fall like Gabriel’s hair, / Which was all gold before it withered. / The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider.
Human: Keep going.
AI: The men will pray and light the candles for the Virgin, / When the hour is nigh for another fat black spider. / Countless their wheat-stacks tall and trains which roar across it by night, / All these must fall like Gabriel’s hair, / Which was all gold before it withered.


... and so we arrived...
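A side note on the arithmetic in the two dialogues: Turing's scripted answer (105621) is itself off by 100, often read as a deliberate, human-like slip, while GPT-3's 71492 misses by far more. A quick check:

```python
# Verify the sums quoted in both dialogues above.
a, b = 34957, 70764
total = a + b
print(total)              # 105721, the correct sum
print(total - 105621)     # Turing's scripted answer is off by 100
print(total - 71492)      # GPT-3's answer is off by 34229
```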

0000010100063533000636080877134408771351
 Prospero      21.07.2020 - 23:42:49 [2K] , level: 1, UP   NEW
https://maraoz.com/2020/07/18/openai-gpt3/

I have a confession: I did not write the above article. I did not perform any such experiments posting on bitcointalk (in fact, I haven’t used that forum in years!). But I did it on my own blog! This article was fully written by GPT-3.
