05-16-2019, 10:38 AM
Source: https://openai.com/blog/better-language-models/
https://talktotransformer.com/ — I copied this passage from the Law of One to see what would come up as an answer:
"Let us for a moment consider thought. What is it, my friends, to take thought? Took you then thought today? What thoughts did you think today? What thoughts were part of the original thought today? In how many of your thoughts did the creation abide? Was love contained? And was service freely given? You are not part of a material universe. You are part of a thought. You are dancing in a ballroom in which there is no material. You are dancing thoughts. You move your body, your mind, and your spirit in somewhat eccentric patterns for you have not completely grasped the concept that you are part of the original thought."
The answer to this text was: "As I recall these ideas I did not take much thought for my own life. I was simply aware that when this thought and its thoughts were present in my mind I could not forget them or pretend that I did not see such thoughts. This awareness was called a sense of awareness and was a quality like consciousness, which came in a sense of separation from consciousness which gave this awareness its own existence and its own nature. But it didn't do me any good to think and thus to carry on the dancing"
Built by Adam King (@AdamDanielKing) as an easier way to play with OpenAI's new machine learning model. In February, OpenAI unveiled a language model called GPT-2 that generates coherent paragraphs of text one word at a time.
For now OpenAI has decided only to release small and medium-sized versions of it which aren't as coherent but still produce interesting results. This site runs the new (May 3) medium-sized model, called 345M for the 345 million parameters it uses. If and when they release the full model, I'll likely get it running here.
While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That's without ever being told that it would be evaluated on those tasks.
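The "one word at a time" generation described above can be sketched with a toy example: a loop that repeatedly asks a model for the most likely next word and appends it. The bigram table below is purely a hypothetical stand-in for GPT-2's 345-million-parameter network; only the shape of the loop is the same.

```python
# Toy illustration of autoregressive generation: at each step the "model"
# looks at the previous word and picks the most likely next word.
# GPT-2 runs the same loop, but conditions on the whole context with a
# large transformer instead of this tiny hand-made bigram table.

# Hypothetical bigram probabilities: previous word -> {next word: probability}
BIGRAMS = {
    "you": {"are": 0.9, "move": 0.1},
    "are": {"dancing": 0.6, "part": 0.4},
    "dancing": {"thoughts": 1.0},
    "part": {"of": 1.0},
    "of": {"a": 1.0},
    "a": {"thought": 1.0},
}

def generate(prompt_word, max_words=6):
    words = [prompt_word]
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        # Greedy decoding: take the highest-probability next word.
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate("you"))  # → "you are dancing thoughts"
```

The real model samples from the probability distribution rather than always taking the top word, which is why the site gives a different continuation each time you run it.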