html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 63 51.8k | body stringlengths 0 36.2k ⌀ | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | I am getting this error when running this command
```
val = nlp.load_dataset('squad', split="validation")
```
FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/dataset_info.json'
Can anybody help? | I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
gist can be found here
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | 27 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
gist can be found here
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
I a... | [
0.0945210531,
-0.0742242113,
-0.03241501,
0.4672498703,
0.2305200845,
0.2202828079,
0.137144044,
0.2303495407,
0.1524402946,
-0.1321381629,
0.0483737402,
0.4689955711,
-0.1310246587,
-0.1885151565,
0.2510723472,
-0.2700097561,
-0.1781658679,
0.300565064,
0.2233231217,
-0.083604... |
https://github.com/huggingface/datasets/issues/157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | It seems like your download was corrupted :-/ Can you run the following command:
```
rm -r /root/.cache/huggingface/datasets
```
to delete the cache completely and rerun the download? | I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
gist can be found here
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | 28 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
gist can be found here
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
It ... | [
-0.1208355278,
-0.0351682156,
-0.0817960426,
0.5541926026,
0.3025416136,
0.1315353066,
0.0039046146,
0.139386341,
0.2804363072,
-0.1001442075,
-0.1951358318,
0.3699631393,
-0.1204649135,
0.0582581647,
0.3326440156,
-0.4019226134,
-0.1763339192,
0.2947905064,
-0.1826589853,
-0.0... |
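The fix suggested in the thread above (delete the cache so the corrupted download is re-fetched) can be sketched in Python rather than shell. The path below is the library's default cache location and may differ if a custom cache directory is configured; the demo deliberately runs on a throwaway directory instead of the real cache.

```python
import shutil
import tempfile
from pathlib import Path

def clear_datasets_cache(cache_dir=None):
    """Remove the datasets cache so a corrupted download is re-fetched.

    Defaults to ~/.cache/huggingface/datasets, the library's default cache
    location; pass an explicit path to target somewhere else.
    """
    if cache_dir is None:
        cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"
    cache_dir = Path(cache_dir)
    if cache_dir.exists():
        shutil.rmtree(cache_dir)  # equivalent of `rm -r` on the cache
    return cache_dir

# Demonstrate on a throwaway directory instead of the real cache:
demo = Path(tempfile.mkdtemp()) / "datasets"
demo.mkdir()
(demo / "dataset_info.json").write_text("{}")
clear_datasets_cache(demo)
```

After clearing, re-running `nlp.load_dataset("squad", split="validation")` starts the download from a clean slate.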
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | Jeez - don't know what happened there :D Should be fixed now!
Thanks a lot for reporting this @tomhosking ! | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 20 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | Hi @patrickvonplaten!
I'm now getting the below error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-3206959998b9> in <module>
1 import nlp
2
----> 3 dataset = nlp.loa... | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 76 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | To correct this error I think you need the master branch of `nlp`, since `WMT` was not included in the beta release of the library.
Can you try:
`pip install git+https://github.com/huggingface/nlp.git`
and check again? | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 40 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | That works, thanks :)
The WMT datasets are listed by `list_datasets()` in the beta release on PyPI - it would be good to only show datasets that are actually supported by that version? | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 34 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some "core" code of the library had to be changed as well.
@thomwolf @lhoestq @julien-c - How should we go about this? If we add a dataset that also requires "core" code changes, how do we han... | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 116 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
https://github.com/huggingface/datasets/issues/156 | SyntaxError with WMT datasets | We plan to have something like a `requirements.txt` per dataset to prevent users from loading a dataset with an old version of `nlp` or other libraries. Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latest features of `nlp`.
For datasets that are on AWS but... | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | 122 | SyntaxError with WMT datasets
The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in ru... | [
-0.3297367394,
0.0324436054,
-0.0193531327,
0.0127458628,
0.2164736986,
-0.014922413,
0.3609465659,
0.5264437795,
0.1301546246,
-0.1121619493,
0.0932864547,
0.4350024164,
-0.3767535388,
0.1414317042,
0.1508706957,
-0.1754978597,
-0.0268027131,
0.1413965076,
-0.358635813,
0.1519... |
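The per-dataset `requirements.txt` idea described above could be sketched as a simple version check before loading; the helper name and the version-tuple format here are hypothetical, not part of the library.

```python
def check_requirements(required, installed):
    """Return the packages whose installed version is below the minimum a
    dataset declares. Version tuples like (0, 3, 0) compare element-wise,
    so (0, 2, 0) < (0, 3, 0). Purely illustrative; this helper does not
    exist in the library.
    """
    stale = []
    for pkg, minimum in required.items():
        if installed.get(pkg, (0,)) < minimum:
            stale.append(pkg)
    return stale

# A dataset declaring it needs nlp >= 0.3.0, checked against nlp 0.2.0:
stale = check_requirements({"nlp": (0, 3, 0)}, {"nlp": (0, 2, 0)})
```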
https://github.com/huggingface/datasets/issues/153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex refs from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + version numbers so they can be included in a readme.md file. | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | 47 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly r... | [
0.0246245619,
0.1188307703,
-0.0451855846,
0.0450589061,
0.0441776253,
-0.037619736,
0.1662005782,
0.4080996215,
-0.016346585,
-0.0382258557,
-0.1563760489,
0.4495480955,
0.0651096851,
0.1650390029,
0.2494655997,
0.1432769001,
-0.083108753,
0.1260277182,
-0.0986575484,
-0.12663... |
https://github.com/huggingface/datasets/issues/153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | Actually, double checking with @mariamabarham, we already have this feature I think.
It's like this currently:
```python
>>> from nlp import load_dataset
>>>
>>> dataset = load_dataset('glue', 'cola', split='train')
>>> print(dataset.info.citation)
@article{warstadt2018neural,
title={Neural Network Accepta... | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | 115 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly r... | [
0.1459960192,
0.1260701865,
-0.0172190554,
0.0895722434,
0.0626267493,
-0.0980595797,
0.1901407689,
0.327424109,
-0.1365866512,
0.0089797759,
-0.1452715546,
0.5205234885,
0.1002959386,
0.0326512977,
0.3782191277,
0.1075442731,
-0.1473997235,
-0.0629666075,
0.0565607958,
-0.0082... |
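The `nlp.bib`-style helper proposed in this thread could be sketched on top of the `dataset.info.citation` attribute shown in the comment above. `collect_bibtex` and the `used_datasets` registry are hypothetical names for illustration; the demo uses stand-in objects rather than real datasets.

```python
from types import SimpleNamespace

def collect_bibtex(used_datasets):
    """Deduplicate and join the BibTeX entries of every dataset used.

    A sketch of the proposed nlp.bib helper: it assumes each entry exposes
    its citation via `.info.citation`, as shown earlier in the thread.
    """
    entries = []
    for ds in used_datasets:
        citation = ds.info.citation.strip()
        if citation and citation not in entries:  # skip duplicates
            entries.append(citation)
    return "\n\n".join(entries)

# Stand-in objects mimicking dataset.info.citation; both point at the same
# reference, so the collector should emit it only once:
cola_a = SimpleNamespace(info=SimpleNamespace(citation="@article{warstadt2018neural, ...}"))
cola_b = SimpleNamespace(info=SimpleNamespace(citation="@article{warstadt2018neural, ...}"))
bib = collect_bibtex([cola_a, cola_b])
```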
https://github.com/huggingface/datasets/issues/153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | Looks good but why would there be a difference between the ref in the source and the one to be printed? | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | 21 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly r... | [
0.3710707724,
-0.1895543188,
0.0191333164,
0.1910195351,
0.2161493897,
-0.2085571289,
0.1987184584,
0.1203199178,
-0.4126297832,
0.1728313267,
-0.0699001774,
0.2978675067,
0.5788614154,
-0.0941336751,
0.0515486598,
0.125373736,
0.0821741,
-0.0912784562,
0.0445880517,
-0.2442143... |
https://github.com/huggingface/datasets/issues/153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | Yes, I think we should remove this warning @mariamabarham.
It's probably a relic of tfds which didn't have the same way to access citations. | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | 24 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly r... | [
0.0700619072,
-0.144565925,
-0.0095196338,
0.015376335,
0.3223142922,
0.1147493124,
0.2942696512,
0.2142910361,
-0.0758147985,
0.2329560667,
-0.1995280683,
0.094711788,
0.2110273391,
-0.1058296561,
-0.024672173,
0.1326557249,
-0.094471477,
-0.0668340027,
0.006023264,
-0.0338111... |
https://github.com/huggingface/datasets/issues/149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | @AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) has been added. Note that it requires a manual download by following the download instructions in the [repo]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).
Maybe we can close this issue for now? | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | 34 | [Feature request] Add Ubuntu Dialogue Corpus dataset
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual ... | [
-0.1457339525,
0.2197995931,
-0.065371111,
-0.0233407244,
0.1595496833,
0.2317843884,
0.2344496846,
0.1404035836,
-0.0935522839,
0.2077976167,
-0.1967510283,
0.2113595158,
0.0908538327,
-0.0198899116,
-0.0083213327,
-0.2836080194,
-0.0770923197,
-0.0565562546,
-0.0296265744,
-0... |
https://github.com/huggingface/datasets/issues/143 | ArrowTypeError in squad metrics | There was an issue in the format, thanks.
Now you can do
```python3
squad_dset = nlp.load_dataset("squad")
squad_metric = nlp.load_metric("/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad")
predictions = [
{"id": v["id"], "prediction_text": v["answers"]["text"][0]} # take first possible answer
for ... | `squad_metric.compute` is giving following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references lo... | 112 | ArrowTypeError in squad metrics
`squad_metric.compute` is giving following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is ho... | [
0.0508039854,
0.0322120674,
-0.0459480435,
0.3747366965,
0.3514668643,
-0.1831917465,
0.1283159703,
0.2945097089,
-0.0273576658,
-0.0166042242,
0.0504659712,
0.7067572474,
-0.2432428449,
-0.1496465057,
-0.2859818041,
-0.1268265396,
-0.0782126337,
0.2369154841,
0.2221025229,
0.1... |
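The corrected prediction format from the fix above, one dict per example carrying `id` and `prediction_text`, is a plain reshaping of SQuAD-style rows. The example below mirrors that snippet, taking the first gold answer as a stand-in prediction; the id is an illustrative placeholder, not a real SQuAD id.

```python
# SQuAD-style examples: "answers" is a dict of parallel lists.
examples = [
    {"id": "example-0",
     "answers": {"text": ["Denver Broncos", "Denver Broncos"],
                 "answer_start": [177, 177]}},
]

# One dict per example with "id" and "prediction_text"; here the first
# gold answer stands in for a model prediction, as in the fix above.
predictions = [
    {"id": ex["id"], "prediction_text": ex["answers"]["text"][0]}
    for ex in examples
]
```

Passing lists of plain `{"text": ...}` dicts instead of this shape is what triggered the `ArrowTypeError` reported in the issue.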
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.
| Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 39 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for "NLP Datasets" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python ... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 143 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a "step back" from the recent changes/renaming of tran... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 66 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | Interesting, thanks for sharing your thoughts.
As we’ll move toward a first non-beta release, we will poll the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)
In the meantime, using `from nlp import l... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 53 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | I feel like we are conflating two distinct subjects here:
1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future
2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 170 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.
I see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit ... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 140 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | > If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)
I'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the modu... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 265 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
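The shadowing concern raised in this thread can be demonstrated in a few lines: rebinding a name that matches an imported module makes the module unreachable in that scope, which is exactly the risk of a top-level package named `nlp`. The sketch uses the stdlib `json` module as a stand-in for any commonly reused variable name.

```python
# Minimal illustration of module/variable name collision.
import json

json = '{"value": 1}'  # a variable named like the module shadows the module

try:
    json.loads(json)   # the name now refers to a str, not the module
    shadowed = False
except AttributeError:
    # str has no attribute 'loads': the module is no longer reachable here
    shadowed = True
```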
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 26 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/ | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 44 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | I can't speak for the HF team @jramapuram, but as a member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:
Remember
https://github.com/huggingface/pytorch-openai-transformer-lm
https://github.com/huggingface/pytorch-pretrained-BERT
https://github... | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 89 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/138 | Consider renaming to nld | Yea I see your point. However, wouldn't scoping solve the entire problem?
```python
import huggingface.datasets as D
import huggingface.transformers as T
```
Calling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | 39 | Consider renaming to nld
Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that confl... | [
0.3546045423,
0.0250588749,
-0.0843050182,
-0.3304427564,
0.1876238436,
-0.236012876,
0.28185305,
0.2058826387,
-0.1652064472,
0.0582918562,
0.2198298872,
0.2940490246,
-0.3030350804,
0.124629885,
0.3023299575,
-0.2793754339,
0.208622247,
0.0491359942,
0.163286671,
-0.12861453,... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | I second this request. The bottom line is that **scores produced with different reference tokenizations are not comparable**. To discourage (even inadvertent) cheating, the user should never touch the reference. The `v13a` tokenization standard is not ideal, but at least it has been consistently used at matrix.statmt.o... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 74 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.2068959624,
-0.2239940464,
-0.0241724346,
-0.1960565895,
0.1427307278,
-0.2892453969,
0.2044683546,
0.242577523,
-0.5539252162,
0.4257796109,
-0.2530023754,
0.3626276255,
-0.0131676933,
-0.0787504688,
-0.1742029041,
0.1995100677,
0.1531563848,
0.1220689416,
0.5717876554,
-0.... |
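The core problem raised above — that BLEU-style scores computed against differently tokenized references are not comparable — can be illustrated with a small stdlib-only sketch. This is purely illustrative (it is not the `mteval-v13a.pl` or sacrebleu implementation; it computes only a clipped n-gram precision, one ingredient of BLEU):

```python
from collections import Counter

def ngram_precision(hyp, ref, n=2):
    """Clipped fraction of hypothesis n-grams that also appear in the reference."""
    def grams(toks):
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    h, r = grams(hyp), grams(ref)
    overlap = sum(min(count, r[g]) for g, count in h.items())
    return overlap / max(1, sum(h.values()))

# Two plausible tokenizations of the SAME reference string "Hello, world!":
ref_simple = "Hello, world!".split()            # ['Hello,', 'world!']
ref_split_punct = ["Hello", ",", "world", "!"]  # punctuation split off

hyp = ["Hello", ",", "world", "!"]
# Identical hypothesis, different reference tokenization, wildly different score:
assert ngram_precision(hyp, ref_split_punct) == 1.0
assert ngram_precision(hyp, ref_simple) == 0.0
```

Because the score depends on the reference tokenization, two papers reporting "BLEU" with different tokenizers are reporting different metrics — which is exactly why sacreBLEU applies its own standard tokenization to raw, detokenized text.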
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | Didn't we have a slide and discussion at WMT admitting that, for production-quality models, BLEU doesn't correlate with human eval anyway?
| https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 21 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.2491149306,
-0.3357779086,
-0.0347297043,
-0.3980178833,
0.186163336,
-0.223855868,
0.3552843928,
0.139535591,
-0.3892636597,
0.4466582835,
-0.0967211127,
0.2908812165,
-0.1527000517,
0.0279200096,
-0.1170514524,
0.204105854,
0.1373978704,
-0.0091412924,
0.6749676466,
-0.105... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | Yes, there are slides like that at WMT every year :) BLEU correlates with human judgment only at coarse levels, and it seems to be getting worse when people try to use it to do model selection among high-performing neural systems.
However, the point isn't whether BLEU is a good metric, but whether your BLEU score ca... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 123 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.3135459721,
-0.2768906951,
-0.051837787,
-0.2917872965,
0.0934212729,
-0.2474936396,
0.2866831124,
0.2192000002,
-0.4277971983,
0.3128698766,
-0.207585901,
0.1104491577,
-0.1642136574,
-0.1205745637,
-0.1374368966,
0.1132957786,
0.1067670807,
-0.0139781153,
0.6649670601,
-0.... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | I do not consider switching this library's default metric from BLEU to the wrapper around SacreBLEU a sufficient solution.
As currently implemented, the wrapper allows end users to toggle SacreBLEU options, but doesn't pass along the SacreBLEU signature. As @mjpost showed in [Post18](https://www.aclweb.org/antho... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 137 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.2754621804,
-0.2523215413,
0.0150290253,
-0.3404410481,
0.2117930949,
-0.3596955538,
0.1853222698,
0.3614327013,
-0.3344032764,
0.3967299461,
0.0437218472,
0.3229064643,
-0.1908611357,
0.0206782371,
-0.1860466003,
0.0257370993,
0.0594055429,
-0.0886750519,
0.6912767291,
-0.0... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | Thanks for sharing your thoughts. This is a very important discussion.
Also one of the first items on our mid-term roadmap (we will try to clean it and share it soon) is to introduce mechanisms to get high-quality traceability and reproducibility for all the processes related to the library.
So having the signatu... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 138 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.0818870068,
-0.1245273426,
-0.0124914898,
-0.2027842253,
0.1472495347,
-0.3798783123,
0.2327238768,
0.2167262584,
-0.3655439913,
0.4332472384,
-0.0330455415,
0.3025976121,
-0.1015590504,
-0.023105111,
-0.06225238,
0.0280107968,
0.0301388688,
-0.0420778692,
0.582521677,
-0.07... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | Yeah, I would love to have discussions about ways this project can have a community-based, transparent process to arrive at strong default metrics. @kpu / @mjpost do you have any suggestions of how that might work or pointers to places where this is done right? Perhaps this question can be a template for what is likely ... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 61 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.1317215562,
-0.1521495134,
0.0231625233,
-0.2831539512,
0.1835172772,
-0.4405021667,
0.1937045008,
0.0987427235,
-0.3281911016,
0.5617697835,
0.0751188993,
0.3197996616,
-0.0818994343,
0.058384046,
-0.1340551376,
0.2278726548,
-0.0086308271,
0.0542875193,
0.513055861,
-0.049... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | I think @bittlingmayer is referring to Figure 6 in http://statmt.org/wmt19/pdf/53/WMT02.pdf . When you look at Appendix A there are some cases where metrics fall apart at the high end and some where they correlate well. en-zh is arguably production-quality.
This could evolve into a metrics Bazaar where the value... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 101 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.2435232103,
-0.3005849123,
-0.0543988273,
-0.2982111573,
0.0659076944,
-0.2633566558,
0.3223110735,
0.164746657,
-0.3544040024,
0.3798713982,
-0.0478950292,
0.2776408494,
0.0203533079,
0.0332898498,
-0.099128969,
0.1471894681,
0.0948517546,
0.0232937969,
0.4395559132,
-0.232... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | While a Bazaar setup works for models / datasets, I am not sure it is ideal for metrics? Ideal from my perspective would be to have tasks with metrics moderated by experts who document, cite, and codify known pitfalls (as above^) and make it non-trivial for beginners to mess it up. | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 52 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.2716113031,
-0.0951526389,
0.0032679953,
-0.3119654357,
0.0734551102,
-0.3363659084,
0.4298068583,
0.1751321852,
-0.1235071123,
0.4883821905,
-0.2467202395,
0.3768926859,
0.0635003746,
-0.0168899819,
-0.1135297492,
0.0980852842,
0.0930720046,
-0.0550510511,
0.3823446631,
-0.... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | @srush @thomwolf
ModelFront could provide (automated, "QE-based") evaluation for all the pretrained translation models you host. Not bottom-up and not valid for claiming SoTA, but independent, practical for builders and not top-down.
For that I would also suggest some diverse benchmarks (so split it out into da... | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | 123 | Tokenized BLEU considered harmful - Discussion on community-based process
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.3010782003,
-0.3047811985,
-0.0611796752,
-0.3138871789,
0.1273147911,
-0.327370882,
0.3816561997,
0.1755339652,
-0.4096499681,
0.3841945827,
-0.1201143935,
0.1421946287,
-0.0175856464,
0.095563814,
-0.0426303595,
0.1114331409,
0.2059901655,
0.0186358374,
0.5584242344,
-0.15... |
https://github.com/huggingface/datasets/issues/137 | Tokenized BLEU considered harmful - Discussion on community-based process | Very important discussion.
I am trying to understand the effects of tokenization.
I wanted to ask which is a good practice.
Should SacreBLEU be used on top of the tokenized output, or on detokenized (raw) text?
https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used b... | [
-0.0534890406,
-0.3386298418,
-0.0087705124,
-0.3742045462,
-0.0085887508,
-0.2285600603,
0.3205291629,
0.0761311278,
-0.4184635878,
0.2326126993,
0.000928638,
0.2679920495,
-0.0949825868,
0.0452179238,
-0.1583248526,
0.1368268877,
0.2894060016,
-0.1308309883,
0.7297713757,
0.0... |
https://github.com/huggingface/datasets/issues/133 | [Question] Using/adding a local dataset | Hi @zphang,
So you can just give the local path to a dataset script file and it should work.
Here is an example:
- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)
- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`
Does it make sense... | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | 55 | [Question] Using/adding a local dataset
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not ex... | [
-0.3090121448,
0.3447009623,
-0.0485155955,
-0.1978272945,
-0.0229687784,
0.0207582749,
0.188450098,
0.1586084515,
0.1726605892,
0.239925161,
0.0062492983,
0.3857522011,
0.0464573875,
0.2451038957,
0.3998592496,
0.0255310126,
-0.0244459249,
0.0700494573,
-0.1732784212,
-0.13732... |
https://github.com/huggingface/datasets/issues/133 | [Question] Using/adding a local dataset | Could you give a more concrete example, please?
I looked up the wikitext dataset script in the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103, etc.)?
Or maybe we can use DownloadManager ... | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | 65 | [Question] Using/adding a local dataset
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not ex... | [
-0.2011065185,
0.1899252981,
-0.0032562944,
-0.2242403626,
-0.1611983925,
0.0761197507,
0.1614467651,
0.0639025196,
0.3210600913,
0.2902062833,
0.1616270095,
0.3396967351,
0.0044045947,
0.1784156263,
0.3171546161,
0.0068331985,
0.0992786959,
0.0957336798,
-0.3213841319,
-0.0826... |
https://github.com/huggingface/datasets/issues/133 | [Question] Using/adding a local dataset | Hi @MaveriQ, although what I am doing is committing a new dataset, I think looking at the imdb script might help.
You may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample. | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | 50 | [Question] Using/adding a local dataset
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not ex... | [
-0.3848370016,
0.4261918664,
-0.0582929365,
-0.0622869618,
-0.1617801189,
-0.0290352255,
0.1196392849,
0.0627900288,
0.225906685,
0.2672969401,
-0.1065551341,
0.3037170172,
-0.1359495521,
0.2354282588,
0.3147618771,
0.0563902631,
-0.1727444977,
0.0608875453,
-0.2876502275,
-0.1... |
https://github.com/huggingface/datasets/issues/133 | [Question] Using/adding a local dataset | The download manager supports local directories. You can specify a local directory instead of a url and it should work. | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | 20 | [Question] Using/adding a local dataset
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not ex... | [
-0.3328150809,
0.3316178322,
-0.0937175155,
-0.1150166094,
-0.0208439771,
-0.01579546,
0.1370426416,
-0.0018575246,
0.2311637402,
0.2999221683,
-0.215514347,
0.3249566853,
-0.0286776833,
0.2468102574,
0.2563113272,
-0.0198876336,
-0.0942599028,
-0.0035388328,
-0.304269284,
-0.0... |
https://github.com/huggingface/datasets/issues/131 | [Feature request] Add Toronto BookCorpus dataset | As far as I understand, `wikitext` refers to `WikiText-103` and `WikiText-2`, which were created by researchers at Salesforce and are mostly used in traditional language modeling.
You might want to say `wikipedia`, a dump from the Wikimedia Foundation.
Also I would like to have Toronto BookCorpus too! Though it involves cop... | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | 51 | [Feature request] Add Toronto BookCorpus dataset
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
As far as I understand, `wikitext` refers to `WikiText-103` and ... | [
-0.1203032732,
-0.0427295789,
-0.0033753994,
-0.0371078402,
-0.0014095673,
0.2495967746,
0.2591406107,
0.078760162,
-0.1759212464,
0.1513455212,
-0.1105385423,
0.3519929349,
-0.2013262063,
0.3885523677,
0.2654998899,
-0.3297360539,
0.1974351704,
0.315286994,
0.1295727342,
-0.34... |
https://github.com/huggingface/datasets/issues/130 | Loading GLUE dataset loads CoLA by default | As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.
Edit: I found the info under `Glue.... | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | 53 | Loading GLUE dataset loads CoLA by default
If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that th... | [
-0.1725945175,
-0.2484225184,
0.0658994317,
0.3159227669,
-0.0194631554,
0.0501955263,
0.376293987,
-0.1403567642,
0.7129030824,
0.1203572974,
-0.2811449766,
0.5193802714,
0.081803821,
0.0636687279,
0.2311221659,
0.0292404927,
0.0766590834,
0.1667408943,
-0.1115131229,
-0.18650... |
https://github.com/huggingface/datasets/issues/130 | Loading GLUE dataset loads CoLA by default | Yes so the first config is loaded by default when no `name` is supplied but for GLUE this should probably throw an error indeed.
We can probably just add an `__init__` at the top of the `class Glue(nlp.GeneratorBasedBuilder)` in the `glue.py` script which does this check:
```
class Glue(nlp.GeneratorBasedBuilder):... | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | 75 | Loading GLUE dataset loads CoLA by default
If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that th... | [
-0.1694331914,
-0.1131426543,
0.1923100948,
0.1447666585,
0.1055871993,
-0.1608264297,
0.324606061,
-0.241668418,
0.4928925931,
0.2975999415,
0.0066171736,
0.5925444365,
0.0851709023,
0.0806322619,
0.1479725242,
0.087144196,
-0.0673542842,
0.3852502406,
-0.2322342098,
-0.119354... |
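The `__init__` check proposed in the comment above can be sketched in plain Python. This is a minimal stand-in, not the real implementation — the actual class subclasses `nlp.GeneratorBasedBuilder` and its configs are richer objects, so the class shape and error text here are illustrative only:

```python
class Glue:
    """Toy builder illustrating a required-config check in __init__."""
    BUILDER_CONFIGS = ["cola", "sst2", "mrpc", "qqp", "stsb", "mnli",
                       "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "ax"]

    def __init__(self, name=None, **kwargs):
        if name is None:
            # Fail loudly instead of silently loading the first config (CoLA).
            raise ValueError(
                "Config name is missing.\n"
                "Please pick one among the available configs: %s\n"
                "Example of usage:\n\t`load_dataset('glue', 'cola')`"
                % self.BUILDER_CONFIGS
            )
        self.config_name = name

# Glue() now raises ValueError; Glue(name="cola") selects the config explicitly.
```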
https://github.com/huggingface/datasets/issues/130 | Loading GLUE dataset loads CoLA by default | An error is raised if the sub-dataset is not specified :)
```
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
``` | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | 42 | Loading GLUE dataset loads CoLA by default
If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that th... | [
-0.1234982461,
-0.2316409796,
0.1299612671,
0.1788024604,
-0.0260566641,
0.0349778533,
0.447948724,
-0.2386649698,
0.6000367999,
0.1970046908,
-0.1358671784,
0.5252869129,
0.1504835486,
0.1636871397,
0.1363488734,
-0.0717557967,
0.0006627274,
0.1845783442,
-0.2312727422,
-0.148... |
https://github.com/huggingface/datasets/issues/129 | [Feature request] Add Google Natural Question dataset | Still work in progress :)
The idea is to have the dataset already processed somewhere so that the user only has to download the processed files. I'm also doing it for wikipedia. | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | 32 | [Feature request] Add Google Natural Question dataset
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
Still work in progress :)
The idea is to have the dataset already processed somewhere so that the user only has to download the processed files. I'...
-0.1264828146,
0.1183222458,
-0.2279913276,
-0.2916734815,
-0.0624215007,
0.2136780471,
0.2895943224,
0.1539699882,
-0.0013704619,
0.1868421286,
-0.0968557373,
0.3380505741,
-0.2951550484,
0.1478503644,
0.5057771206,
0.0203669127,
0.1696390659,
0.0553917214,
0.2871236205,
-0.09... |
https://github.com/huggingface/datasets/issues/129 | [Feature request] Add Google Natural Question dataset | Super appreciate your hard work !!
I'll cross my fingers and hope an easily loadable wikipedia dataset will come soon. | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | 19 | [Feature request] Add Google Natural Question dataset
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
Super appreciate your hard work !!
I'll cross my fingers and hope an easily loadable wikipedia dataset will come soon. | [
-0.2017223835,
0.125656113,
-0.2436525077,
-0.1397189796,
-0.0242803134,
0.1866628677,
0.3122389615,
0.2764436305,
0.2076870054,
0.1970581859,
-0.171297729,
0.4047229588,
-0.2165551484,
0.0521144792,
0.4223890007,
-0.0738181099,
0.0963117182,
0.0344125703,
0.1412131041,
-0.0688... |
https://github.com/huggingface/datasets/issues/129 | [Feature request] Add Google Natural Question dataset | Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.
However we had planned to change this conversion step anyway, so we'll just make sure that it enables to process and conv... | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | 66 | [Feature request] Add Google Natural Question dataset
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in pyth... | [
-0.2953701317,
0.2493466586,
-0.2129655033,
-0.2288989574,
-0.2177683264,
-0.0077631567,
0.1600125134,
0.2642011344,
0.0725729913,
0.2330665737,
-0.1219621673,
0.4617760479,
-0.3594292998,
0.3047492504,
0.6277963519,
-0.1398962885,
0.1091499552,
0.1171830446,
0.1310337484,
0.00... |
https://github.com/huggingface/datasets/issues/128 | Some error inside nlp.load_dataset() | Google colab has an old version of Apache Arrow built-in.
Be sure you execute the "pip install" cell and restart the notebook environment if the colab asks for it. | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
`... | 29 | Some error inside nlp.load_dataset()
First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is conn... | [
-0.0670082048,
0.0724995136,
-0.0823870823,
0.1897586584,
0.2675674558,
0.0750563815,
0.2924348116,
0.3888992667,
0.0351886526,
-0.1877865344,
-0.1114237607,
0.2747960687,
-0.0162892137,
0.064321056,
0.2407229692,
-0.2864511907,
-0.1546425372,
0.2549159825,
-0.0124720428,
0.123... |
https://github.com/huggingface/datasets/issues/120 | 🐛 `map` not working | I didn't assign the output 🤦‍♂️
```python
dataset.map(test)
```
should be :
```python
dataset = dataset.map(test)
``` | I'm trying to run a basic example (mapping function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
samp... | 17 | 🐛 `map` not working
I'm trying to run a basic example (mapping function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def ... | [
-0.1308910251,
-0.288844496,
-0.0395004712,
-0.0892403573,
0.1618370116,
0.0317667834,
0.0189051535,
0.2528118789,
0.3727516234,
0.1373796165,
0.23242037,
0.6317355037,
-0.0919270664,
0.101572223,
0.1246008798,
0.0128666703,
0.1166425422,
0.2223275304,
-0.0682750568,
-0.0151178... |
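The fix above — assigning the result of `map` — works because `map` does not mutate the dataset in place; it returns a new one. A toy stdlib-only model of that behavior (this is not the real `nlp.Dataset`, whose `map` also handles batching and Arrow-backed caching; names here are illustrative):

```python
class TinyDataset:
    """Minimal illustration of a map() that returns a new object instead of mutating."""

    def __init__(self, rows):
        self._rows = rows

    def __getitem__(self, i):
        return self._rows[i]

    def map(self, fn):
        # fn is applied to a copy of each row; self._rows is never touched.
        return TinyDataset([fn(dict(row)) for row in self._rows])

def add_prefix(sample):
    sample["title"] = "My sentence: " + sample["title"]
    return sample

ds = TinyDataset([{"title": "hello"}])
ds.map(add_prefix)             # result discarded -> ds is unchanged
assert ds[0]["title"] == "hello"
ds = ds.map(add_prefix)        # result assigned -> the change is visible
assert ds[0]["title"] == "My sentence: hello"
```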
https://github.com/huggingface/datasets/issues/119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :
```python
import pyarrow
!pip show pyarrow
print("version = {}".format(pyarrow.__version__))
```
> Name: pyarrow
Version: 0.17.0
Summary: Python library for Apache Arrow
Home-page: https://arr... | I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| 63 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBa... | [
-0.22734119,
-0.0230960008,
-0.0340330563,
0.1723941565,
0.0568677299,
-0.0996635258,
0.2892406881,
0.1857036352,
-0.0378172733,
0.0181548689,
0.0521484353,
0.5537127852,
-0.2503231764,
0.04018135,
0.0903166234,
-0.0751873404,
0.0834169835,
0.2587282062,
0.1037928537,
-0.127767... |
https://github.com/huggingface/datasets/issues/119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine. | I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| 19 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBa... | [
-0.2835428715,
0.044781588,
0.0172027331,
0.226200223,
0.1537765563,
-0.0492426232,
0.2736165524,
0.1078586802,
-0.0629181787,
0.0287807696,
0.0899543539,
0.4489679933,
-0.3376853466,
0.2220074683,
0.2667759955,
-0.1032790318,
0.040154621,
0.3651624322,
0.1162212268,
-0.0348989... |
https://github.com/huggingface/datasets/issues/116 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.
> ArrowInvalid: Column 1 named references expected length 36 but got length 56 | I'm trying to use rouge metric.
I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
for lp, lg in zip(p, g):
... | 22 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
I'm trying to use rouge metric.
I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.loa... | [
-0.0459096767,
0.1151691675,
-0.0150177768,
0.2721494138,
0.2404059023,
-0.263548255,
0.0431785174,
0.2833087742,
-0.4245962203,
0.2897651792,
-0.1165680662,
0.613515079,
0.002956209,
-0.4883935153,
0.0284865648,
-0.1368684471,
-0.0081864297,
0.2913642526,
0.4083459675,
0.10401... |
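One plausible source of the "expected length X but got length Y" error is that the two tokenized files end up producing prediction and reference lists of different lengths. A small stand-alone sketch (using in-memory stand-ins for `test.pred.tokenized` and `test.gold.tokenized`, so no `nlp` install is needed) shows the pattern of collecting whole batches and checking the lengths match before handing them to a metric:

```python
import io

# Hypothetical stand-ins for test.pred.tokenized / test.gold.tokenized.
pred_file = io.StringIO("the cat sat\na quick fox\n")
gold_file = io.StringIO("the cat sat down\na fast fox\n")

# Collect whole batches instead of passing one string at a time: each
# column then has a well-defined length, which is what an Arrow-backed
# metric expects.
predictions = [line.strip() for line in pred_file]
references = [line.strip() for line in gold_file]

assert len(predictions) == len(references)  # catch mismatched files early
print(predictions)  # ['the cat sat', 'a quick fox']
```

With real files on disk, the same length check would flag a truncated or misaligned prediction file before the metric ever sees it.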
https://github.com/huggingface/datasets/issues/116 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.
Let me do this change | I'm trying to use rouge metric.
I have two files, `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
for lp, lg in zip(p, g):
... | 49 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
I'm trying to use rouge metric.
I have two files, `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.loa... | [
-0.0529838912,
0.1234705225,
-0.0156167932,
0.2224007249,
0.2445524037,
-0.2058068216,
0.0348035246,
0.2737763524,
-0.2766896486,
0.2284604907,
-0.0470919088,
0.5116130114,
-0.0322656557,
-0.5062630177,
0.1793057323,
-0.2050361782,
0.0496256463,
0.2373667359,
0.376671046,
0.005... |
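The `add` vs `add_batch` split described in the comment above can be sketched with a toy metric buffer. This is only an illustration of the intended semantics; the real `nlp`/`datasets` metric API may differ in details:

```python
# A toy metric buffer: add() records one prediction/reference pair,
# add_batch() records a whole batch and validates the lengths match.

class MetricBuffer:
    def __init__(self):
        self.predictions = []
        self.references = []

    def add(self, prediction, reference):
        """Record a single prediction/reference pair."""
        self.predictions.append(prediction)
        self.references.append(reference)

    def add_batch(self, predictions, references):
        """Record a batch; both lists must have the same length."""
        if len(predictions) != len(references):
            raise ValueError(
                f"expected length {len(predictions)} but got length {len(references)}"
            )
        self.predictions.extend(predictions)
        self.references.extend(references)

m = MetricBuffer()
m.add("a cat", "the cat")              # one pair at a time
m.add_batch(["x", "y"], ["x!", "y!"])  # or a whole batch at once
print(len(m.predictions))  # 3
```

Keeping the two entry points separate makes the per-line loop from the issue (`for lp, lg in zip(p, g): metric.add(lp, lg)`) unambiguous.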
https://github.com/huggingface/datasets/issues/115 | AttributeError: 'dict' object has no attribute 'info' | I could access the info by first accessing the different splits :
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm['train'].info)
```
Information seems to be duplicated between the subsets :
```python
print(cnn_dm["train"].info == cnn_dm["test"].info == cnn_dm["validation"].i... | I'm trying to access the information of CNN/DM dataset :
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info' | 42 | AttributeError: 'dict' object has no attribute 'info'
I'm trying to access the information of CNN/DM dataset :
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info'
I could access the info by first accessing the diff... | [
0.1249499768,
-0.4288876057,
-0.0755955502,
0.6906058192,
0.0929053873,
0.0969588384,
0.2694359124,
0.0933502391,
-0.0262439493,
0.3825283647,
-0.1993027925,
0.2700486481,
-0.0093514975,
0.1719217002,
-0.0433853157,
-0.210740611,
-0.1371985674,
0.1066758111,
0.073227182,
-0.267... |
https://github.com/huggingface/datasets/issues/115 | AttributeError: 'dict' object has no attribute 'info' | Good point @Colanim ! What happens under the hood when running:
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
is that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable... | I'm trying to access the information of CNN/DM dataset :
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info' | 115 | AttributeError: 'dict' object has no attribute 'info'
I'm trying to access the information of CNN/DM dataset :
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info'
Good point @Colanim ! What happens under the hood w... | [
0.0860446692,
-0.2287645191,
-0.0386711992,
0.5566274524,
0.2393110245,
0.082007058,
0.2878682911,
0.2890191376,
-0.0095847547,
0.1997733116,
-0.0262701139,
0.3800337911,
-0.2866001427,
0.3110383153,
-0.1329597831,
-0.3128982782,
-0.0617806241,
0.0819351077,
0.0685541853,
-0.18... |
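The behaviour described above (a dict of one dataset object per split, each carrying the same info) can be mimicked with a few lines of plain Python. `ToyDataset` is an illustrative stand-in, not the real `nlp` internals:

```python
# Toy illustration: load_dataset returns a dict keyed by split name, so the
# dict itself has no .info attribute -- but every split object does, and
# they all share the same metadata.

class ToyDataset:
    def __init__(self, split, info):
        self.split = split
        self.info = info  # shared metadata object

info = {"description": "CNN/DailyMail", "version": "1.0.0"}
cnn_dm = {split: ToyDataset(split, info) for split in ("train", "validation", "test")}

print(cnn_dm["train"].info == cnn_dm["test"].info == cnn_dm["validation"].info)  # True
```

This is why `cnn_dm.info` raises `AttributeError` while `cnn_dm['train'].info` works.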
https://github.com/huggingface/datasets/issues/38 | [Checksums] Error for some datasets | Fixed with 06882b4
Now your command works :)
Note that you can also do
```
nlp-cli test datasets/nlp/xnli --save_checksums
```
So that it will save the checksums directly in the right directory. | The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`,
the same bug happens:
When running:
```
python nlp-cli test xnli --save_checksums
```
leads to:
```
File "nlp-cli", line 33, in <module>
service.run()
File "/home/patrick/python_bin/nlp/commands... | 32 | [Checksums] Error for some datasets
The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`,
the same bug happens:
When running:
```
python nlp-cli test xnli --save_checksums
```
leads to:
```
File "nlp-cli", line 33, in <module>
service.run()
File ... | [
-0.0627012625,
0.4912027419,
0.0221911091,
0.0352558307,
0.1421763003,
-0.106048584,
0.2264273316,
0.5860052705,
0.0751722381,
-0.0241698287,
-0.1206225008,
0.3043448925,
0.1317292899,
-0.1162972301,
-0.0193385985,
0.2671162486,
0.1346077025,
0.113117516,
0.1048001423,
-0.10833... |
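The kind of record that a `--save_checksums` run writes is typically a hash plus a file size per downloaded resource. A self-contained sketch of computing such a record (function name and output format are illustrative, not the actual `nlp-cli` implementation):

```python
import hashlib
import os
import tempfile

def file_checksum(path, chunk_size=8192):
    """Return (sha256 hex digest, size in bytes), reading the file in chunks."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest(), size

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.txt")
    with open(path, "wb") as f:
        f.write(b"hello checksums")
    checksum, size = file_checksum(path)
    print(size)          # 15
    print(checksum[:8])  # first bytes of the sha256 digest
```

Chunked reading keeps memory usage flat even for multi-gigabyte dataset downloads.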
https://github.com/huggingface/datasets/issues/6 | Error when citation is not given in the DatasetInfo | Yes looks good to me.
Note that we may refactor `info.py` quite heavily to make it a lot simpler (it's very complicated for what is basically a dictionary of info, I think). | The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.... | 31 | Error when citation is not given in the DatasetInfo
The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
... | [
0.2855115235,
0.0732320994,
0.1477970779,
0.1601160914,
0.28277722,
0.2352982759,
0.2543575466,
0.4805192351,
-0.4466490746,
0.1662403196,
0.4036864936,
0.3917038143,
0.0535712391,
-0.2033304572,
-0.0758437067,
-0.3508138061,
-0.1062088683,
0.2190295756,
0.3720495701,
-0.052251... |
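The crash above comes from `__repr__` assuming `citation` is always set. A defensive `__repr__` along the lines of the discussed fix can be sketched with a toy stand-in for `DatasetInfo` (names are illustrative, not the real `nlp.info` module):

```python
# Toy DatasetInfo whose __repr__ tolerates a missing citation instead of
# crashing when the attribute is None.

class DatasetInfo:
    def __init__(self, description="", citation=None):
        self.description = description
        self.citation = citation

    def __repr__(self):
        citation = self.citation if self.citation is not None else "<none>"
        return f"DatasetInfo(description={self.description!r}, citation={citation!r})"

info = DatasetInfo(description="toy dataset")  # no citation given
print(repr(info))  # no exception; citation rendered as '<none>'
```

Falling back to a placeholder keeps `repr()` total over all field combinations, which matters for interactive use.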
https://github.com/huggingface/datasets/issues/5 | ValueError when a split is empty | To fix this I propose to modify only the file `arrow_reader.py` with a few updates. First, update the following method:
```python
def _make_file_instructions_from_absolutes(
name,
name2len,
absolute_instructions,
):
"""Returns the files instructions from the absolute instructions list."... | When a split is empty either TEST, VALIDATION or TRAIN I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/data... | 379 | ValueError when a split is empty
When a split is empty either TEST, VALIDATION or TRAIN I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs... | [
-0.2972345054,
-0.0131540755,
-0.1030680835,
0.293113023,
0.0207598414,
-0.0219011754,
0.6005368829,
0.47498402,
-0.0488540046,
0.4204657972,
0.0286983997,
0.2306585461,
-0.3276995718,
0.1830052733,
-0.4715830386,
-0.2210289836,
-0.062096972,
0.403324753,
0.0316318534,
0.002202... |
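The core idea of the proposed `arrow_reader.py` change is to skip empty splits when building read instructions, rather than producing instructions that fail later. A minimal sketch of that logic (function and field names are illustrative simplifications of `_make_file_instructions_from_absolutes`):

```python
# Build read instructions only for splits that actually contain examples;
# an empty split is silently skipped instead of raising later.

def make_file_instructions(name2len, requested_splits):
    instructions = []
    for split in requested_splits:
        num_examples = name2len.get(split, 0)
        if num_examples == 0:
            continue  # empty split: nothing to read
        instructions.append({"split": split, "skip": 0, "take": num_examples})
    return instructions

name2len = {"train": 1000, "validation": 0, "test": 200}  # empty validation
instructions = make_file_instructions(name2len, ["train", "validation", "test"])
print([i["split"] for i in instructions])  # ['train', 'test']
```

Whether an empty split should be skipped or surfaced as an explicit empty dataset is a design choice; this sketch follows the skip-it behaviour discussed in the issue.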
https://github.com/huggingface/datasets/issues/5 | ValueError when a split is empty | Yes sounds good to me!
Do you want to make a PR? or I can do it as well | When a split is empty either TEST, VALIDATION or TRAIN I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/data... | 19 | ValueError when a split is empty
When a split is empty either TEST, VALIDATION or TRAIN I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs... | [
-0.2972345054,
-0.0131540755,
-0.1030680835,
0.293113023,
0.0207598414,
-0.0219011754,
0.6005368829,
0.47498402,
-0.0488540046,
0.4204657972,
0.0286983997,
0.2306585461,
-0.3276995718,
0.1830052733,
-0.4715830386,
-0.2210289836,
-0.062096972,
0.403324753,
0.0316318534,
0.002202... |
https://github.com/huggingface/datasets/issues/4 | [Feature] Keep the list of labels of a dataset as metadata | Yes! I see mostly two options for this:
- a `Feature` approach like currently (but we might deprecate features)
- wrapping in a smart way the Dictionary arrays of Arrow: https://arrow.apache.org/docs/python/data.html?highlight=dictionary%20encode#dictionary-arrays | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | 31 | [Feature] Keep the list of labels of a dataset as metadata
It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.
Yes! I see mostly two options for this:
- a `Feature` approach like currently (but we might deprecate features)
- wr... | [
0.1531038284,
-0.0619826838,
-0.1409769952,
0.0409963802,
0.1746007353,
0.1912367493,
0.2029763162,
0.2303461134,
-0.0681682974,
0.0453066081,
-0.0701302215,
0.5808178186,
-0.3312078118,
0.1982668191,
0.0061327382,
-0.1212564111,
-0.0128118051,
-0.027239684,
0.0372482091,
-0.05... |
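The "mapping methods" mentioned above can be illustrated with a minimal ClassLabel-like feature. The real `nlp`/`datasets` `ClassLabel` exposes `str2int`/`int2str` similarly, but this toy version is only a sketch:

```python
# A minimal class-label feature keeping the list of label names as metadata,
# with both directions of the name <-> id mapping.

class ToyClassLabel:
    def __init__(self, names):
        self.names = list(names)
        self._name2id = {name: i for i, name in enumerate(self.names)}

    def str2int(self, name):
        """Map a label name to its integer id."""
        return self._name2id[name]

    def int2str(self, index):
        """Map an integer id back to its label name."""
        return self.names[index]

labels = ToyClassLabel(["negative", "positive"])
print(labels.str2int("positive"))  # 1
print(labels.int2str(0))           # negative
```

In practice this metadata lives in `dataset.info.features`, as noted in the comment above.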
https://github.com/huggingface/datasets/issues/4 | [Feature] Keep the list of labels of a dataset as metadata | This should be accessible now as a feature in dataset.info.features (and even have the mapping methods). | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | 16 | [Feature] Keep the list of labels of a dataset as metadata
It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.
This should be accessible now as a feature in dataset.info.features (and even have the mapping methods). | [
-0.0496051498,
-0.0984713584,
-0.1723788828,
-0.0273313224,
0.1680131108,
0.1654706597,
0.3242642283,
0.2305075675,
-0.0478425734,
0.2778838873,
-0.2302711904,
0.4928883612,
-0.1844333112,
0.2717520595,
-0.0722043812,
0.0264096037,
-0.1009540185,
0.0530565232,
0.041412428,
-0.0... |
https://github.com/huggingface/datasets/issues/4 | [Feature] Keep the list of labels of a dataset as metadata | Hi,
I hope we can get better documentation.
It took me more than an hour to find this way to get the label information. | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | 25 | [Feature] Keep the list of labels of a dataset as metadata
It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.
Hi,
I hope we can get better documentation.
It took me more than an hour to find this way to get the label infor... | [
-0.0012371235,
-0.164871186,
-0.1444353461,
0.0895125195,
0.1662276089,
0.1849586815,
0.3181915879,
0.0862609893,
-0.0790703148,
0.3192558289,
-0.1647142768,
0.5471565127,
-0.2006158084,
0.2491293848,
-0.1022705436,
0.0059490539,
-0.1573646963,
0.0784157217,
0.1580573916,
-0.12... |
https://github.com/huggingface/datasets/issues/4 | [Feature] Keep the list of labels of a dataset as metadata | Yes we are working on the doc right now, should be in the next release quite soon. | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | 17 | [Feature] Keep the list of labels of a dataset as metadata
It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.
Yes we are working on the doc right now, should be in the next release quite soon. | [
0.0434393957,
-0.0362069793,
-0.1563859284,
-0.0524543561,
0.1223034486,
0.220848009,
0.2759692073,
0.178853482,
0.0087437415,
0.2267383635,
-0.1574053168,
0.5626071095,
-0.2707874179,
0.2366515547,
-0.0232620221,
0.0165952444,
-0.0474640504,
0.020963395,
0.1306463331,
-0.03368... |
https://github.com/huggingface/datasets/issues/3 | [Feature] More dataset outputs | Yes!
- pandas will be a one-liner in `arrow_dataset`: https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_pandas
- for Spark I have no idea. let's investigate that at some point | Add the following dataset outputs:
- Spark
- Pandas | 23 | [Feature] More dataset outputs
Add the following dataset outputs:
- Spark
- Pandas
Yes!
- pandas will be a one-liner in `arrow_dataset`: https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_pandas
- for Spark I have no idea. let's investigate that at some point | [
-0.4236602783,
-0.0618052967,
-0.250079155,
0.0073684147,
0.3025445044,
0.1241440102,
0.2948099673,
0.370973289,
-0.1464987546,
0.016603265,
-0.2502678335,
0.7553215623,
-0.0170541443,
0.3141037226,
0.2778238654,
-0.1931229979,
0.094296366,
0.1060331017,
-0.2115755081,
-0.06620... |
https://github.com/huggingface/datasets/issues/3 | [Feature] More dataset outputs | For Spark it looks to be pretty straightforward as well https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html, but it looks like a dependency on Spark would be necessary, so never mind, we can skip it | Add the following dataset outputs:
- Spark
- Pandas | 28 | [Feature] More dataset outputs
Add the following dataset outputs:
- Spark
- Pandas
For Spark it looks to be pretty straightforward as well https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html, but it looks like a dependency on Spark would be necessary, so never mind, we can skip it
-0.5538603067,
-0.1469929814,
-0.2703718543,
0.060532406,
0.2588167191,
0.1869446337,
0.2700171471,
0.4753180742,
0.0849914625,
0.1245463043,
-0.0857297108,
0.6752995253,
-0.1100905165,
0.399535805,
0.0250176676,
-0.2358332276,
-0.1084426045,
-0.0002743146,
-0.2984127402,
0.029... |
https://github.com/huggingface/datasets/issues/2 | Issue to read a local dataset | Ok, there is some news, more good than bad :laughing:
The dataset script now became:
```python
import csv
import nlp
class Bbc(nlp.GeneratorBasedBuilder):
VERSION = nlp.Version("1.0.0")
def __init__(self, **config):
self.train = config.pop("train", None)
self.validation = co... | Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | 353 | Issue to read a local dataset
Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):... | [
-0.271961689,
0.1254661232,
-0.0766851977,
0.2568113208,
0.3318960369,
0.0312694162,
0.2642412484,
0.3003318012,
0.3266818225,
0.2098685652,
0.1191099212,
0.2447595596,
-0.2942000926,
0.122020781,
0.1568033695,
-0.1723629832,
-0.1275068969,
0.3326410651,
0.1501984596,
-0.366157... |
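The heart of a local CSV dataset script like the `Bbc` class above is a `_generate_examples`-style generator. A self-contained sketch of that generator, using an in-memory CSV so it runs without any files (the `skip_first_row`/`delimiter` options mirror the config properties discussed later in the thread, and the column layout is only an example):

```python
import csv
import io

# Sketch of the generator a local CSV dataset script ends up writing:
# yield (key, example_dict) pairs, one per data row.

def generate_examples(csv_text, delimiter=",", skip_first_row=True):
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    if skip_first_row:
        next(reader)  # drop the header line
    for idx, row in enumerate(reader):
        category, text = row[0], row[1]
        yield idx, {"category": category, "text": text}

csv_text = "category,text\nbusiness,Stocks rise\nsport,Team wins final\n"
examples = list(generate_examples(csv_text))
print([ex["category"] for _, ex in examples])  # ['business', 'sport']
```

In a real script the generator would open the file path handed over by `_split_generators` instead of an `io.StringIO` buffer.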
https://github.com/huggingface/datasets/issues/2 | Issue to read a local dataset | Ok great, so as discussed today, let's:
- have a main dataset directory inside the lib with sub-directories hashed by the content of the file
- keep a cache for downloading the scripts from S3 for now
- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)
... | Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | 88 | Issue to read a local dataset
Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):... | [
-0.271961689,
0.1254661232,
-0.0766851977,
0.2568113208,
0.3318960369,
0.0312694162,
0.2642412484,
0.3003318012,
0.3266818225,
0.2098685652,
0.1191099212,
0.2447595596,
-0.2942000926,
0.122020781,
0.1568033695,
-0.1723629832,
-0.1275068969,
0.3326410651,
0.1501984596,
-0.366157... |
https://github.com/huggingface/datasets/issues/2 | Issue to read a local dataset | Good plan!
Yes I do use `builder_kwargs` for other things such as:
- dataset name
- properties to know how to properly read a CSV file: whether to skip the first line, which delimiter is used, and which column ids to use.
- properties to know how to properly read a JSON file: which properties in a JSON ... | Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | 66 | Issue to read a local dataset
Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):... | [
-0.271961689,
0.1254661232,
-0.0766851977,
0.2568113208,
0.3318960369,
0.0312694162,
0.2642412484,
0.3003318012,
0.3266818225,
0.2098685652,
0.1191099212,
0.2447595596,
-0.2942000926,
0.122020781,
0.1568033695,
-0.1723629832,
-0.1275068969,
0.3326410651,
0.1501984596,
-0.366157... |