| html_url (string) | title (string) | comments (string) | body (string, nullable) | comment_length (int64) | text (string) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes you did it right.
Did you rebase to include the changes of #828 ?
EDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
... | [...] |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'
What you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 128 | Creating dataset consumes too much memory
... | [...] |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
... | [...] |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.
Otherwise in `load_dataset` you can specify `writer_batch_size=` | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 16 | Creating dataset consumes too much memory
... | [...] |
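The advice in this thread (lowering `DEFAULT_WRITER_BATCH_SIZE` on the builder class, or passing `writer_batch_size=` to `load_dataset`) works because the arrow writer accumulates examples in an in-memory buffer and only flushes them to disk once per batch. A toy sketch of that buffer-then-flush behavior (`ToyWriter` is a made-up illustration, not the actual `datasets` ArrowWriter):

```python
class ToyWriter:
    """Made-up stand-in for the arrow writer: examples accumulate in an
    in-memory buffer and are flushed to disk every `batch_size` rows."""

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0

    def write(self, example):
        self.buffer.append(example)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushes += 1  # the real writer serializes a record batch here
            self.buffer.clear()

# 95 examples with a batch size of 10: at most 10 examples sit in RAM at once
writer = ToyWriter(batch_size=10)
for i in range(95):
    writer.write({"idx": i})
writer.flush()  # flush the final partial batch
print(writer.flushes)  # 10 (9 full batches + 1 partial)
```

With large examples such as the 260x210x3 image sequences described here, a smaller batch size bounds how many examples are held in RAM between flushes.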
https://github.com/huggingface/datasets/issues/737 | Trec Dataset Connection Error | Thanks for reporting.
That's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.
I'm opening a PR to update the url | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/... | 34 | Trec Dataset Connection Error
... | [...] |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Thanks for reporting. That's a bug indeed.
Apparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`) | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 35 | Possible caching bug
... | [...] |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 63 | Possible caching bug
... | [...] |
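A minimal sketch of the caching bug discussed above: the builder config id is derived from `data_files` only, so two loads that differ only in `encoding` collide on the same cache entry and the second one silently reuses the first one's Arrow file. The function and names below are hypothetical, not the actual `DatasetBuilder._create_builder_config` code:

```python
import hashlib
import json

def config_id(data_files, include_kwargs, **config_kwargs):
    """Derive a cache id for a builder configuration.

    include_kwargs=False mimics the buggy behavior described above:
    only data_files enters the hash, so changing `encoding` silently
    reuses the cached Arrow file from the previous load.
    """
    payload = {"data_files": data_files}
    if include_kwargs:
        payload.update(config_kwargs)
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

buggy_a = config_id(["test1.txt"], False, encoding="latin_1")
buggy_b = config_id(["test1.txt"], False, encoding="utf-8")
fixed_a = config_id(["test1.txt"], True, encoding="latin_1")
fixed_b = config_id(["test1.txt"], True, encoding="utf-8")
print(buggy_a == buggy_b)  # True: both loads map to the same cache entry
print(fixed_a == fixed_b)  # False: the encoding change produces a new entry
```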
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, try to provide more information please.
Example code in a Colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPI package versions). | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 38 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | > Hi, try to provide more information please.
>
> Example code in a Colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPI package versions).
I have updated the description; sorry for the incomplete issue. | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 53 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:
```
>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')
Using custom data confi... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 87 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | NonMatchingChecksumError: Checksums didn't match for dataset source files:
I got this issue when I tried to work with my own datasets. Kindly tell me where I can get the checksums of the train and dev files in my GitHub repo. | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 39 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I got a similar issue for the xnli dataset while working on Colab with Python 3.7.
`nlp.load_dataset(path = 'xnli')`
The above command resulted in following issue :
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 44 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | This says it was fixed, but I'm still getting it.
command:
dataset = load_dataset("ted_talks_iwslt", language_pair=("en", "es"), year="2014", download_mode="force_redownload")
got:
Using custom data configuration en_es_2014-35a2d3350a0f9823
Downloading and preparing dataset ted_talks_iwslt/en_es_2014 (download: 2.15 K... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 52 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
... | [...] |
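All of the reports in this thread trace back to the same verification step: a digest recorded in the dataset's metadata no longer matches the freshly downloaded file, typically because the host changed or moved the file. A toy version of that check (not the actual `datasets` implementation, which raises `NonMatchingChecksumError`):

```python
import hashlib

def verify_checksums(expected, actual_files):
    """Compare recorded sha256 digests against freshly computed ones;
    the real library raises NonMatchingChecksumError in this case."""
    bad = [name for name, data in actual_files.items()
           if hashlib.sha256(data).hexdigest() != expected.get(name)]
    if bad:
        raise ValueError(f"Checksums didn't match for dataset source files: {bad}")

# digest recorded when the dataset script was written
expected = {"train.txt": hashlib.sha256(b"v1 contents").hexdigest()}

verify_checksums(expected, {"train.txt": b"v1 contents"})  # passes silently

error = None
try:
    # the host has since replaced the file, so the digest differs
    verify_checksums(expected, {"train.txt": b"v2 contents"})
except ValueError as exc:
    error = exc
print(error)  # Checksums didn't match for dataset source files: ['train.txt']
```

In practice, the fixes reported in the thread amount to refreshing the cached download (e.g. `download_mode="force_redownload"`, as one commenter tried) or updating the recorded checksums for the new source file.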
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Should be fixed now:

Not sure I understand what you mean by the second part?
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 16 | need to redirect /nlp to /datasets and remove outdated info
... | [...] |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Thank you!
> Not sure I understand what you mean by the second part?
Compare the 2:
* https://huggingface.co/datasets/wikihow
* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
Can you see the difference? 2nd has formatting, 1st doesn't.
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 31 | need to redirect /nlp to /datasets and remove outdated info
... | [...] |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | For context, those are two different pages (not an old vs new one): one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displaying some metadata about the dataset.
For the second one, we'll move to markdown parsing soon, so it'll be formatted better. | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 56 | need to redirect /nlp to /datasets and remove outdated info
... | [...] |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Nice! :)
It's indeed the first time we have such contributions, so we'll have to figure out the appropriate way to integrate them.
Could you add details on what they could be used for?
| I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 36 | Adding pseudo-labels to datasets
... | [...] |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | A new configuration for those datasets should do the job then.
Note that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the de... | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 65 | Adding pseudo-labels to datasets
... | [...] |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 32 | Adding pseudo-labels to datasets
... | [...] |
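The backward-compatibility concern raised above (existing `load_dataset("xsum")` calls should keep working once a pseudo-labels config is added) comes down to keeping a default config name. A hypothetical sketch of that resolution logic (config names here are made up for illustration):

```python
def resolve_config(available, requested=None, default=None):
    """Hypothetical config-name resolution for a dataset with several
    builder configs. Keeping a default preserves backward compatibility:
    code that never passed a config name keeps working after a new
    "pseudo_labels" config is added."""
    if requested is None:
        if default is not None:
            return default
        if len(available) == 1:
            return available[0]
        raise ValueError(f"config name required; pick one of {available}")
    if requested not in available:
        raise ValueError(f"unknown config {requested!r}; pick one of {available}")
    return requested

print(resolve_config(["plain_text", "pseudo_labels"], default="plain_text"))  # plain_text
print(resolve_config(["plain_text", "pseudo_labels"], "pseudo_labels"))       # pseudo_labels
```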
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | 
I assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3? | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 22 | Adding pseudo-labels to datasets
... | [...] |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.
```
datasets-cli upload_dataset path/to/xsum
``` | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 23 | Adding pseudo-labels to datasets
... | [...] |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | We only support http by default for downloading.
If you really need to use FTP, feel free to use a library that allows downloading through FTP in your dataset script (I see that you've started working on #722, that's awesome!). The users will get a message to install the extra library when they load the dataset... | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 120 | feat(dl_manager): add support for ftp downloads
... | [...] |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Also, maybe it could be interesting to have direct support for FTP inside the `datasets` library. Do you know any good libraries that we might consider adding as an (optional?) dependency? | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 34 | feat(dl_manager): add support for ftp downloads
... | [...] |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Downloading an `ftp` file is as simple as:
```python
import urllib.request
urllib.request.urlretrieve('ftp://server/path/to/file', 'file')
```
I believe this should be supported by the library, as it's not using any dependency and is a trivial amount of code. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 35 | feat(dl_manager): add support for ftp downloads
... | [...] |
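Since `urllib.request` handles `ftp://` URLs natively in Python 3, a dependency-free downloader covering both schemes could look like the sketch below (an illustration, not the actual `dl_manager` code from #722):

```python
import shutil
import urllib.request
from urllib.parse import urlparse

def download(url, filename):
    """Dependency-free downloader for http(s) and ftp URLs.

    Note: in Python 3 the retrieval helpers live in `urllib.request`
    (the Python 2 spelling `urllib.urlretrieve` no longer exists).
    """
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https", "ftp"):
        raise ValueError(f"unsupported scheme: {scheme!r}")
    with urllib.request.urlopen(url) as response, open(filename, "wb") as out:
        shutil.copyfileobj(response, out)

# e.g. the dataset from this issue (network access required):
# download("ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz",
#          "phoenix-2014-T.v3.tar.gz")
```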
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | I know it's unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722
so it's possible to see how the download component interacts with the FTP download ability. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 33 | feat(dl_manager): add support for ftp downloads
... | [...] |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | @hoanganhpham1006 yes.
See pull request https://github.com/huggingface/datasets/pull/722, it has a loader for this dataset, mostly ready.
There's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 30 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
-0.2078182995,
-0.1362144053,
0.0223398153,
0.3684147,
-0.0426030159,
0.0941683054,
-0.4160579145,
0.4587391019,
0.246598646,
-0.0219785813,
-0.1048667505,
0.0549302362,
-0.1739568263,
0.0406178199,
0.2238925993,
-0.285359174,
-0.1095827222,
0.047097154,
0.0764120519,
0.1486930... |
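The records above discuss teaching the download manager to handle `ftp://` URLs. As a rough sketch of the idea (not the library's actual implementation — the helper names `url_scheme` and `download` here are illustrative), Python's standard `urllib.request.urlretrieve` already understands the `ftp` scheme, so a downloader mainly needs to dispatch on the URL scheme:

```python
import urllib.parse
import urllib.request


def url_scheme(url: str) -> str:
    """Return the scheme of a URL, e.g. 'ftp' or 'https'."""
    return urllib.parse.urlparse(url).scheme


def download(url: str, dest: str) -> str:
    """Download `url` to `dest`; urlretrieve handles http(s) and ftp alike."""
    scheme = url_scheme(url)
    if scheme not in ("http", "https", "ftp"):
        raise ValueError(f"unsupported scheme: {scheme}")
    urllib.request.urlretrieve(url, dest)  # network call, not exercised here
    return dest


# The PHOENIX download link from the issue uses the ftp scheme:
assert url_scheme("ftp://wasserstoff.informatik.rwth-aachen.de/pub/x.tar.gz") == "ftp"
```

Real download managers add retries, caching and checksumming on top of this dispatch; the sketch only shows why ftp support is a small change.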
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 23 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
-0.2195909321,
-0.1120685786,
-0.0246193372,
0.3834895194,
-0.052603025,
0.1177398562,
-0.3828472197,
0.4139447808,
0.2322626114,
0.0223057419,
-0.0140954079,
0.0171898641,
-0.1022190526,
0.0319084711,
0.2101530731,
-0.2765794396,
-0.2073132694,
0.0704653487,
0.1246692911,
0.09... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The dataset loader is not yet ready, because of that issue.
If you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https) | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 37 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
-0.2821509838,
-0.0215411,
-0.0087811267,
0.2353302389,
-0.0750114471,
0.0881376565,
-0.2813770175,
0.4609620869,
0.2376454026,
0.0213105474,
0.0197127182,
0.0793239102,
-0.1120399684,
0.0288053975,
0.2074829042,
-0.2595874071,
-0.1745925397,
0.0500783026,
0.0860737562,
0.13925... |
https://github.com/huggingface/datasets/issues/720 | OSError: Cannot find data file when not using the dummy dataset in RAG | Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet.
```
99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock
```
```
--------... | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | 387 | OSError: Cannot find data file when not using the dummy dataset in RAG
## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set... | [
-0.2258687764,
-0.0487167984,
0.0123972688,
0.1052549779,
0.3706668019,
-0.0698539391,
0.332316637,
0.2692304254,
0.0155125251,
0.325091213,
-0.1394265443,
0.2176526636,
-0.063734889,
-0.3219818175,
-0.015681345,
0.0387511514,
0.0088648042,
0.1769445539,
-0.2071303129,
-0.22746... |
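The `.lock` file in the traceback above suggests a filesystem lock guarding concurrent downloads of the same cache entry. A minimal sketch of such a lock (assuming a simple `O_CREAT|O_EXCL` scheme — not necessarily what `datasets`/`filelock` actually does):

```python
import os


class SimpleFileLock:
    """Minimal advisory lock: atomically create `<path>.lock`, remove it on release."""

    def __init__(self, path: str):
        self.lock_path = path + ".lock"
        self.fd = None

    def __enter__(self):
        # O_EXCL makes creation fail if another process already holds the lock.
        self.fd = os.open(self.lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        return self

    def __exit__(self, *exc):
        os.close(self.fd)
        os.remove(self.lock_path)
```

A stale `.lock` file left behind by a crashed process would make later runs block or fail, which is consistent with the "transient issue" that went away on a fresh rerun.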
https://github.com/huggingface/datasets/issues/720 | OSError: Cannot find data file when not using the dummy dataset in RAG | An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | 20 | OSError: Cannot find data file when not using the dummy dataset in RAG
## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set... | [
-0.2258687764,
-0.0487167984,
0.0123972688,
0.1052549779,
0.3706668019,
-0.0698539391,
0.332316637,
0.2692304254,
0.0155125251,
0.325091213,
-0.1394265443,
0.2176526636,
-0.063734889,
-0.3219818175,
-0.015681345,
0.0387511514,
0.0088648042,
0.1769445539,
-0.2071303129,
-0.22746... |
https://github.com/huggingface/datasets/issues/709 | How to use similarity settings other than "BM25" in Elasticsearch index ? | Datasets does not use the elasticsearch API to define custom similarity. If you want to use a custom similarity, the best option would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed... | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
=... | 88 | How to use similarity settings other than "BM25" in Elasticsearch index ?
**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:*... | [
-0.1099792421,
-0.6802698374,
-0.0450149886,
-0.0399368592,
-0.1662630886,
0.0516546145,
-0.1838180125,
0.2227956504,
0.4452563226,
0.1382720172,
-0.3413906693,
-0.1468973309,
0.0635204613,
-0.2089973986,
-0.359283179,
-0.0928409621,
0.0289196763,
0.0444089249,
0.1041621417,
0.... |
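The answer above suggests registering a custom similarity with a raw request to Elasticsearch, then referencing it from the index configuration. A sketch of building such a settings payload (the field name `text` and the similarity name `my_similarity` are illustrative; actually sending it requires a running ES instance):

```python
import json


def similarity_settings(name: str, b: float, k1: float) -> dict:
    """Index settings registering a tunable BM25 variant under `name`."""
    return {
        "settings": {
            "index": {
                "similarity": {
                    name: {"type": "BM25", "b": b, "k1": k1}
                }
            }
        },
        "mappings": {
            "properties": {
                # any field mapped with this similarity is scored with it
                "text": {"type": "text", "similarity": name}
            }
        },
    }


payload = json.dumps(similarity_settings("my_similarity", b=0.5, k1=1.6))
# e.g. curl -X PUT localhost:9200/my_index -H 'Content-Type: application/json' -d "$payload"
```

Once the index exists with these settings, the index name can be passed along in the configuration used when adding the Elasticsearch index to a dataset.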
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 26 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 19 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ? | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 26 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | By default the datasets loaded with `load_dataset` live on disk.
It's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.
Small correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice t... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 51 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
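The disk-vs-memory gap discussed in this thread comes from reading through a memory-mapped Arrow file instead of RAM-resident data. A stdlib-only sketch of the two access patterns on a toy file (not an Arrow table — just to show the mechanism `datasets` relies on):

```python
import mmap
import os
import tempfile

payload = os.urandom(1 << 20)  # 1 MiB of toy "dataset" bytes

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

# Disk-backed: pages are faulted in from the OS page cache on access.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    mapped_copy = bytes(mm)

# In-memory: the same bytes already live in the process heap.
in_memory_copy = payload

assert mapped_copy == in_memory_copy
os.unlink(path)
```

Both paths yield identical data; the difference is only where the bytes live, which is why `keep_in_memory=True` can speed up random access at the cost of RAM.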
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Great! Thanks a lot.
I did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.
```python
features = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)
features.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])
... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 170 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.
On disk:
```
book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')
book_corpus = book_corpus.map(encode, batc... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 247 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Is there a way to increase the number of batches read from memory, or to multiprocess it? I think either it is reading with just 1 core or it is reading very small chunks from disk, leaving my GPU at 0% between batches | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 45 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 21 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
-0.3806663454,
-0.0424090587,
-0.0314098261,
0.4377959967,
0.1005084068,
0.1263736933,
0.0136549324,
0.3071820438,
0.0914101377,
-0.0730988681,
-0.0662421212,
0.301128,
-0.0139891645,
-0.2742892206,
-0.0800374225,
0.0623482354,
0.2097817063,
0.0289246328,
-0.2366966456,
-0.1054... |
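The `dataloader_num_workers` fix above works by overlapping batch preparation with GPU compute. A framework-free sketch of the same idea using a thread pool to prefetch batches (illustrative only; PyTorch's `DataLoader` uses worker processes, and `make_batch` stands in for real per-batch work):

```python
from concurrent.futures import ThreadPoolExecutor


def make_batch(i: int) -> list:
    """Stand-in for tokenization / decoding work done per batch."""
    return [i * 8 + j for j in range(8)]


def prefetched_batches(n_batches: int, workers: int = 4):
    """Yield batches while the pool prepares the next ones in the background."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for batch in pool.map(make_batch, range(n_batches)):
            yield batch


batches = list(prefetched_batches(4))
```

With several workers preparing batches ahead of consumption, the consumer (here, the training loop) no longer idles between batches.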
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | @punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity. | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 18 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2744112611,
-0.2090409994,
0.0152530558,
0.1800350249,
0.0473459177,
-0.0619746558,
0.0390898995,
0.1851370782,
-0.0446798131,
0.0569028407,
-0.0508330464,
0.2923358381,
-0.005038945,
0.0731779039,
0.1184345409,
-0.3647959232,
0.2140393406,
0.5875737071,
-0.383457005,
-0.039... |
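The issue above is about pinning `pyarrow<1` in the requirements. A tiny sketch of the comparison such a pin performs (real resolvers implement PEP 440 via `packaging.version`; this naive tuple comparison is only for illustration and ignores pre-releases):

```python
def satisfies_upper_bound(version: str, bound: str) -> bool:
    """True if `version` < `bound`, comparing dotted numeric components."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) < parse(bound)


# pyarrow 0.17.1 satisfies "pyarrow<1"; 1.0.1 (the release with the breaking
# change reported here) does not.
assert satisfies_upper_bound("0.17.1", "1.0.0")
assert not satisfies_upper_bound("1.0.1", "1.0.0")
```

Note the numeric comparison matters: as strings, `"0.9.9" > "0.10.0"`, which is exactly the kind of bug PEP 440-aware parsing avoids.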
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | Hello @mathcass
1. I forked the repository and cloned it on my local system.
2. Then I learnt how we can publish our package on pypi.org, and found some instructions on this in the setup.py documentation.
3. Then I visited the Perplexity document link that you shared above. I created a colab link from there keep ... | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 103 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2086723149,
-0.2492388636,
0.0104707647,
0.1431630552,
-0.1351187378,
-0.1367650032,
0.0601092987,
0.1777838618,
-0.0722261295,
0.187683776,
-0.1349501312,
0.3921404183,
-0.0105914706,
0.2025153935,
0.0202153791,
-0.3189197183,
0.1558382511,
0.6317853928,
-0.2972457409,
-0.0... |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | Thanks for looking at this @punitaojha and thanks for sharing the notebook.
I just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid.
Thanks again. | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 56 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2600724101,
-0.2294464558,
0.0445462242,
0.2166787535,
0.074973233,
-0.0639310405,
-0.0022502295,
0.1691178232,
-0.0127057796,
0.0657430515,
0.020896988,
0.2901550233,
-0.0119957691,
0.0165848266,
0.1564210057,
-0.3912961185,
0.2354255319,
0.5927875638,
-0.3565094471,
-0.051... |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install "pyarrow<1" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).
Please see the Colab below:
https://colab.resear... | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 52 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2761949301,
-0.14012447,
0.0347476006,
0.160385251,
0.0485610552,
-0.042708829,
0.0266226567,
0.1984370947,
-0.0582812279,
0.060050305,
0.0015629289,
0.2838503718,
-0.0044424045,
0.0376972072,
0.1520176828,
-0.3687957823,
0.2303958833,
0.5788679719,
-0.3444223106,
-0.0225952... |
https://github.com/huggingface/datasets/issues/705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | Hi !
Thanks for reporting :)
Indeed this is an issue on the `datasets` side.
I'm creating a PR | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
... | 19 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from ma... | [
-0.2919270992,
-0.6131456494,
-0.0538777038,
0.0724286214,
0.53585428,
0.0512855202,
0.5790147185,
0.3014276326,
0.2885265648,
0.1793081611,
-0.0349087976,
0.1695186496,
-0.0423111692,
-0.1191876456,
-0.1670348197,
-0.2730586529,
-0.1069537178,
0.1412786543,
-0.4372421205,
-0.0... |
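The `'<' not supported between instances of 'NamedSplit'` error above arises when objects without comparison methods get sorted. A generic sketch of making such wrapper objects sortable (illustrative; the actual fix landed in the `datasets` PR mentioned in the reply):

```python
from functools import total_ordering


@total_ordering
class Split:
    """Named split wrapper that sorts by its name, so sorted() works."""

    def __init__(self, name: str):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Split) and self.name == other.name

    def __lt__(self, other):
        return self.name < other.name


splits = sorted([Split("validation"), Split("test"), Split("train")])
```

`total_ordering` derives `<=`, `>`, `>=` from the two methods defined, which is enough for any code path that sorts split objects.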
https://github.com/huggingface/datasets/issues/699 | XNLI dataset is not loading | Also, I tried the code below to solve the checksum error
`datasets-cli test ./datasets/xnli --save_infos --all_configs`
and it shows
```
2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
... | `dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | 170 | XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Check... | [
-0.2810516655,
0.1401007622,
-0.0911716521,
0.1260451227,
0.2266674936,
-0.2409972101,
0.3072920442,
0.4281841815,
0.0896518901,
-0.0614525452,
-0.0190946534,
0.4737130702,
0.1864164472,
0.1176806763,
0.1495383084,
0.2195383608,
0.0383551307,
0.0943840072,
-0.0668893307,
-0.100... |
https://github.com/huggingface/datasets/issues/699 | XNLI dataset is not loading | Hi !
Yes the download url changed.
It's updated on the master branch. I'm doing a release today to fix that :) | `dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | 22 | XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Check... | [
-0.1545364559,
0.328220129,
-0.0674622878,
0.1016500518,
0.1184460819,
-0.0849033818,
0.1139142886,
0.351785779,
-0.0838161334,
-0.1345045716,
-0.0737447068,
0.3510937095,
0.2048694789,
0.0506397896,
0.1383060217,
0.3167422414,
0.1006804481,
0.0527608395,
-0.1457015574,
-0.1142... |
https://github.com/huggingface/datasets/issues/690 | XNLI dataset: NonMatchingChecksumError | Thanks for reporting.
The data file must have been updated by the host.
I'll update the checksum with the new one. | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | 21 | XNLI dataset: NonMatchingChecksumError
Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = lo... | [
-0.2586138844,
0.2562110424,
0.0357761532,
0.1724496037,
-0.0067086695,
-0.0366750695,
0.0808680505,
0.4067071378,
0.1755132377,
0.2336781472,
-0.2135458887,
0.3052040637,
-0.0187663715,
-0.0720470697,
-0.2150233388,
0.5297763944,
0.0938085616,
0.2160158902,
0.007998012,
0.0222... |
https://github.com/huggingface/datasets/issues/690 | XNLI dataset: NonMatchingChecksumError | I'll do a release in the next few days to make the fix available for everyone.
In the meantime you can load `xnli` with
```
xnli = load_dataset('xnli', script_version="master")
```
This will use the latest version of the xnli script (available on master branch), instead of the old one. | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | 49 | XNLI dataset: NonMatchingChecksumError
Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = lo... | [
-0.3440381885,
0.293183893,
-0.0001978044,
0.147711575,
0.0428990014,
-0.0164932441,
0.0669284239,
0.4335775673,
0.1969802827,
0.2615414858,
-0.1929386258,
0.3978195488,
-0.0380583405,
-0.0033610892,
-0.2366970479,
0.4933423996,
0.0317254178,
0.2110861838,
-0.0416039228,
0.0739... |
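The `NonMatchingChecksumError` in this thread is raised when the hash of the downloaded file no longer matches the recorded one (here, because the host replaced the file). A sketch of the underlying check with `hashlib` (the recorded-checksum plumbing in `datasets` is more involved):

```python
import hashlib


def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify_checksum(data: bytes, expected: str) -> None:
    """Raise if the downloaded payload does not hash to the recorded value."""
    actual = sha256_of(data)
    if actual != expected:
        raise ValueError(f"Checksums didn't match: {actual} != {expected}")


payload = b"XNLI 1.0"
recorded = sha256_of(payload)
verify_checksum(payload, recorded)  # passes while the hosted file is unchanged
```

When the upstream file changes, the only real fix is to update the recorded checksum in the dataset script, which is what the release mentioned above does.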
https://github.com/huggingface/datasets/issues/687 | `ArrowInvalid` occurs while running `Dataset.map()` function | Hi !
This is because `encode` expects one single text as input (str), or one tokenized text (List[str]).
I believe that you actually wanted to use `encode_batch` which expects a batch of texts.
However this method is only available for our "fast" tokenizers (ex: BertTokenizerFast).
BertJapanese is not one of them... | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | 128 | `ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='st... | [
-0.3730497658,
0.0209991802,
-0.1026320085,
-0.0166303106,
0.1107027456,
0.1678619534,
0.1268473864,
0.3861753047,
-0.1962422282,
0.1160630137,
0.2094027996,
0.6037948728,
-0.116786696,
0.0404954478,
-0.3405692577,
0.0122456159,
-0.0138799874,
0.2132772505,
0.0759288073,
-0.204... |
https://github.com/huggingface/datasets/issues/687 | `ArrowInvalid` occurs while running `Dataset.map()` function | Thank you very much for the kind and precise suggestion!
I'm looking forward to seeing BertJapaneseTokenizer built into the "fast" tokenizers.
I tried `map` with multiprocessing as follows, and it worked!
```python
# There was a Pickle problem if I use `lambda` for multiprocessing
def encode(examples):
re... | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | 61 | `ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='st... | [
-0.3730497658,
0.0209991802,
-0.1026320085,
-0.0166303106,
0.1107027456,
0.1678619534,
0.1268473864,
0.3861753047,
-0.1962422282,
0.1160630137,
0.2094027996,
0.6037948728,
-0.116786696,
0.0404954478,
-0.3405692577,
0.0122456159,
-0.0138799874,
0.2132772505,
0.0759288073,
-0.204... |
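The comment above notes that `map` with multiprocessing failed with a `lambda` but worked with a named `def`. That is a pickling constraint of `multiprocessing`: worker processes receive functions by importable name. A sketch showing the difference directly (no `datasets` needed):

```python
import pickle


def encode(example: str) -> str:
    """A module-level function: importable by name, so workers can receive it."""
    return example.upper()


def is_picklable(fn) -> bool:
    try:
        pickle.dumps(fn)
    except Exception:
        return False
    return True


# A plain module-level callable pickles fine; a lambda has no importable
# qualified name, so pickling it fails.
assert is_picklable(len)
assert not is_picklable(lambda x: x.upper())
```

This is why replacing the `lambda` with a top-level `def encode(...)` makes `num_proc` work.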
https://github.com/huggingface/datasets/issues/686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new) | Might be worth updating to https://huggingface.co/datasets/viewer/ | 26 | Dataset browser url is still https://huggingface.co/nlp/viewer/
Might be worth updating to https://huggingface.co/datasets/viewer/
Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new) | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/678 | The download instructions for c4 datasets are not contained in the error message | Also note that C4 is a dataset that needs an Apache Beam runtime to be generated.
For example Dataflow, Spark, Flink etc.
Usually we generate the dataset on our side once and for all, but we haven't done it for C4 yet.
More info about beam datasets [here](https://huggingface.co/docs/datasets/beam_dataset.html)
L... | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff... | 56 | The download instructions for c4 datasets are not contained in the error message
The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/676 | train_test_split returns empty dataset item | Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config) | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | 20 | train_test_split returns empty dataset item
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split... | [
… (embedding floats elided) ]
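As a sanity reference while debugging this: `train_test_split(test_size=0.1)` conceptually shuffles row indices and slices off a test fraction, so if the resulting rows come back empty, the index mapping is the first thing to inspect. A stdlib-only sketch of that splitting logic (not the actual `datasets` implementation):

```python
import random

def split_indices(num_rows, test_size=0.1, seed=42):
    # Shuffle all row indices, then carve off the test fraction
    indices = list(range(num_rows))
    random.Random(seed).shuffle(indices)
    num_test = max(1, int(num_rows * test_size))
    return indices[num_test:], indices[:num_test]

train_idx, test_idx = split_indices(100, test_size=0.1)
print(len(train_idx), len(test_idx))  # 90 10
```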
https://github.com/huggingface/datasets/issues/676 | train_test_split returns empty dataset item | We'll do a release pretty soon to include the fix :)
In the meantime you can install the lib from source if you want to | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | 25 | train_test_split returns empty dataset item
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.
This is the output:
```
>>> dataset = load_dataset('blended_skill_talk', split='train')
Using custom data configuration default <-- This step never ends
``` | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 41 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | This was fixed in #644
I'll do a new release soon :)
In the meantime you can run it by installing from source | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 23 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | Closing since version 1.1.0 got released with Windows support :)
Let me know if it works for you now | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 19 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | We should try to regenerate the data using the official script.
But IIRC that's what we used originally, so I'm not sure why it didn't match in the first place.
I'll let you know when the dataset is updated | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 41 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Thanks, looking forward to hearing your update on this thread.
This is a blocking issue for us; would appreciate any progress on this front. We can also help with the fix, if you deem it appropriately. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 36 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | I just started the generation on my side, I'll let you know how it goes :) | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 16 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Hmm after a first run I'm still missing 136668/226711 urls.
I'll relaunch it tomorrow to try to get the remaining ones. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 21 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | So I managed to download them all but when parsing only 226,181/226,711 worked.
Not sure if it's worth digging and debugging parsing at this point :/ | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 26 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Thanks @lhoestq
It would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way! | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 30 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | I gave up at an even earlier point. The dataset I use has 204,017 train examples. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 16 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | @lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would apprec... | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 63 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
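For anyone matching examples against that official split file: it is JSON mapping split names to lists of BBC article IDs, so lining an example up with a split is just set membership. A stdlib sketch with an inline stand-in for the file (the key names `train`/`validation`/`test` are an assumption — check the actual file):

```python
import json

# Stand-in for the downloaded XSum split file; the real one maps
# split names to (much longer) lists of BBC article IDs.
split_json = '{"train": ["101", "102"], "validation": ["201"], "test": ["301"]}'
splits = {name: set(ids) for name, ids in json.loads(split_json).items()}

def split_of(article_id):
    # Return which split an article ID belongs to, if any
    for name, ids in splits.items():
        if article_id in ids:
            return name
    return None

print(split_of("102"), split_of("999"))  # train None
```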
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | >So I managed to download them all but when parsing only 226,181/226,711 worked.
@lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 38 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Well I couldn't parse what I downloaded.
Unfortunately I think I won't be able to take a look at it this week.
I can try to send you what I got if you want to give it a shot @jbragg
Otherwise feel free to re-run the xsum download script, maybe you'll be luckier than me | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 55 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/669 | How to skip a example when running dataset.map | Hi @xixiaoyao,
Depending on what you want to do you can:
- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter
- or directly detect the invalid examples inside the callable used with `map` and return them unchanged or ... | in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. | 95 | How to skip a example when running dataset.map
in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map.
Hi @xixiaoyao,
Depending on what you want to do... | [
… (embedding floats elided) ]
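To make the first suggestion concrete: `Dataset.filter` keeps the rows whose predicate returns True, rather than trying to drop rows from inside `map`. A stdlib-only sketch of the same pattern — the validity rule here is invented for illustration:

```python
def is_valid(example):
    # Invented validity rule: a usable example needs a non-empty context
    return bool(example["context"].strip())

raw_examples = [
    {"context": "some text", "label": 0},
    {"context": "   ", "label": 1},   # invalid: blank context
    {"context": "more text", "label": 1},
]

kept = [example for example in raw_examples if is_valid(example)]
print(len(kept))  # 2
```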
https://github.com/huggingface/datasets/issues/667 | Loss not decrease with Datasets and Transformers | Hi, did you manage to fix your issue?
If so feel free to share your fix and close this thread | HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data... | 21 | Loss not decrease with Datasets and Transformers
HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fi... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | No they are other similar copies but they are not provided by the official Bert models authors. | 17 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
No they are other similar copies but they are not provided by the official Bert models authors. | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Hi !
It works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.
Which version of transformers/datasets are you using ? | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 22 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Colab reproducing the error for us to be able to debug this error. | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 33 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
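A stdlib-only way to collect the setup details requested above — it works on any machine with no extra installs (library versions can be added by hand or via `pip freeze`):

```python
import platform
import sys

def env_report():
    # Collect basic platform facts that are useful in a bug report
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "machine": platform.machine(),
    }

for key, value in env_report().items():
    print(f"{key}: {value}")
```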
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | I have the same issue with `transformers/BertJapaneseTokenizer`.
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
t = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking'... | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 861 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | > I have the same issue with `transformers/BertJapaneseTokenizer`.
It looks like this tokenizer is not supported, unfortunately.
This is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill.
We need objects passed to `map` to be picklable for our ca...
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 153 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
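The standard fix for a wrapper that holds an unpicklable member (like the `fugashi` tagger above) is to implement `__getstate__`/`__setstate__` so the member is dropped before pickling and rebuilt afterwards — the pattern a library-side fix can use. A toy sketch with a stand-in tagger, not the real `BertJapaneseTokenizer` code:

```python
import pickle

class UnpicklableTagger:
    # Stand-in for something like fugashi's GenericTagger
    def __reduce__(self):
        raise TypeError("can't pickle Tagger objects")
    def tag(self, text):
        return text.split()

class Tokenizer:
    def __init__(self):
        self.tagger = UnpicklableTagger()
    def __getstate__(self):
        # Drop the unpicklable member before pickling
        state = self.__dict__.copy()
        del state["tagger"]
        return state
    def __setstate__(self, state):
        # Rebuild the member on unpickle
        self.__dict__.update(state)
        self.tagger = UnpicklableTagger()
    def tokenize(self, text):
        return self.tagger.tag(text)

clone = pickle.loads(pickle.dumps(Tokenizer()))
print(clone.tokenize("hello world"))  # ['hello', 'world']
```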
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | We can also update the `BertJapaneseTokenizer` in `transformers` as you just shown @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi 's repo and good for the other users of `transformers` as well.
I'm currently working on `transformers`; I'll include it in the https://github.com/hug...
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 57 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Thank you for the rapid and polite response!
@lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occored. I created another issue #687 .
@thomwolf Wow, really fast work. I'm looking forward to the next release 🤗 | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | 42 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [... | [
… (embedding floats elided) ]
https://github.com/huggingface/datasets/issues/664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | Hi !
Thanks for reporting.
It looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.
Could you check that there exists at least one dataset builder class?
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors.
```
train_dataset = datasets.load_dataset('./my_squad.py') ... | 34 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise e... | [
… (embedding floats elided) ]
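On the error itself: the loader scans the imported script module for a `DatasetBuilder` subclass, and calling the missing (None) result is consistent with the `'NoneType' object is not callable` message. A self-contained sketch of that kind of scan, with stand-in classes rather than the real `datasets` internals:

```python
import inspect

class DatasetBuilder:                          # stand-in for datasets.DatasetBuilder
    pass

class GeneratorBasedBuilder(DatasetBuilder):   # stand-in for the usual base class
    pass

class MySquad(GeneratorBasedBuilder):
    # The script must define at least one concrete builder like this
    pass

def find_builder_classes(namespace):
    # Keep classes that subclass DatasetBuilder, excluding the bases themselves
    bases = (DatasetBuilder, GeneratorBasedBuilder)
    return [
        obj for obj in namespace.values()
        if inspect.isclass(obj) and issubclass(obj, DatasetBuilder) and obj not in bases
    ]

module_namespace = {"MySquad": MySquad, "helper": len, "VERSION": "1.0.2"}
print([cls.__name__ for cls in find_builder_classes(module_namespace)])  # ['MySquad']
```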
https://github.com/huggingface/datasets/issues/657 | Squad Metric Description & Feature Mismatch | Thanks for reporting !
There is indeed a mismatch between the features and the kwargs description.
I believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[... | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | 63 | Squad Metric Description & Feature Mismatch
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also... | [
0.0384344459,
-0.1900424063,
-0.0537598021,
-0.0725317523,
0.4192172587,
-0.0426004566,
0.1088643521,
0.0794382468,
-0.215908587,
0.1062575951,
-0.181975022,
0.4140356183,
0.3362864554,
-0.0604612976,
0.1003780216,
0.1568865627,
0.0473280028,
0.0839828923,
-0.0124866264,
-0.139... |
https://github.com/huggingface/datasets/issues/657 | Squad Metric Description & Feature Mismatch | But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files). | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | 23 | Squad Metric Description & Feature Mismatch
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also... | [
0.0714832917,
-0.3686691523,
-0.0856219605,
-0.0963041261,
0.4095246196,
-0.1043118462,
0.1253432482,
0.039565444,
-0.2808917463,
0.0161716715,
-0.0155541645,
0.3508736789,
0.2142228782,
0.0877159685,
-0.1212029532,
0.1233919263,
0.0228275098,
0.114642702,
-0.1461588293,
-0.072... |
https://github.com/huggingface/datasets/issues/651 | Problem with JSON dataset format | Currently the `json` dataset doesn't support this format unfortunately.
However you could load it with
```python
from datasets import Dataset
import pandas as pd
df = pd.read_json("path_to_local.json", orient="index")
dataset = Dataset.from_pandas(df)
``` | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records i... | 32 | Problem with JSON dataset format
I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note ... | [
0.1458692104,
0.1199861914,
-0.0658117086,
0.3802386522,
-0.0994930342,
0.2573180497,
0.237864539,
0.4517602324,
0.4588267803,
-0.0621232614,
0.1368803829,
0.485652566,
-0.145507887,
0.2548016012,
-0.3202169836,
-0.1480668932,
0.0985159874,
0.2186022699,
0.2703850865,
-0.047025... |
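The pandas route above (`pd.read_json(..., orient="index")`) is one option; the same reshaping — an id-keyed mapping into a list of records — can also be done with the stdlib before handing the file to a loader. A dependency-free sketch (the helper name and `id` column are illustrative, not part of any library API):

```python
import json


def index_json_to_records(src_path: str, dst_path: str, id_column: str = "id") -> int:
    """Convert {'id01234': {...}, 'id01235': {...}} into a list of records,
    hoisting each key into its own column. Returns the number of records."""
    with open(src_path) as f:
        data = json.load(f)
    records = [{id_column: key, **fields} for key, fields in data.items()]
    with open(dst_path, "w") as f:
        json.dump(records, f)
    return len(records)
```

The output file is then in the record-list shape that tabular loaders generally expect.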
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Hi :)
In your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.
Let me know if it helps | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 25 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ s... | [
-0.2775551975,
0.0534974821,
0.0134970266,
0.3913993239,
-0.0723610148,
0.1356920153,
0.452233851,
0.3349657655,
-0.0165994558,
0.0843320787,
-0.0575095415,
0.2236391157,
-0.1632839739,
-0.0482926145,
0.0641592816,
-0.1906500459,
-0.1669516265,
0.2437196523,
0.0134652406,
0.031... |
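The advice above — shipping `subset000.xz` as a *directory* of plain text files inside the dummy-data zip, rather than as a real compressed file — can be scripted. A sketch using the stdlib `zipfile` (file names and paths are illustrative):

```python
import zipfile


def build_dummy_zip(zip_path: str) -> None:
    """Write a dummy-data zip in which each 'subsetNNN.xz' entry is a
    directory of plain text files, mimicking the layout that
    dl_manager.extract would otherwise produce."""
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("openwebtext/subset000.xz/0000.txt", "dummy text\n")
        zf.writestr("openwebtext/subset001.xz/0000.txt", "more dummy text\n")
```

The dataset script can then iterate the "extracted" directories unchanged when running against the dummy data.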
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Thanks for your comment @lhoestq ,
Just for confirmation: changing the dummy data like this won't make the dummy test exercise the extraction of `subsetxxx.xz`; it actually circumvents it. But since the real data will be tested anyway, is that OK? |
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 43 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ s... | [
-0.2775551975,
0.0534974821,
0.0134970266,
0.3913993239,
-0.0723610148,
0.1356920153,
0.452233851,
0.3349657655,
-0.0165994558,
0.0843320787,
-0.0575095415,
0.2236391157,
-0.1632839739,
-0.0482926145,
0.0641592816,
-0.1906500459,
-0.1669516265,
0.2437196523,
0.0134652406,
0.031... |
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Yes it's fine for now. We plan to add a job for slow tests.
And at one point we'll also do another pass on the dummy data handling and consider extracting files. | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 32 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ s... | [
-0.2775551975,
0.0534974821,
0.0134970266,
0.3913993239,
-0.0723610148,
0.1356920153,
0.452233851,
0.3349657655,
-0.0165994558,
0.0843320787,
-0.0575095415,
0.2236391157,
-0.1632839739,
-0.0482926145,
0.0641592816,
-0.1906500459,
-0.1669516265,
0.2437196523,
0.0134652406,
0.031... |
https://github.com/huggingface/datasets/issues/649 | Inconsistent behavior in map | Thanks for reporting !
This issue must have appeared when we refactored type inference in `nlp`
By default the library tries to keep the same feature types when applying `map` but apparently it has trouble with nested structures. I'll try to fix that next week | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
d... | 45 | Inconsistent behavior in map
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field... | [
0.3283727765,
-0.2924291193,
-0.0719692037,
0.0814119279,
-0.0720761493,
-0.2064360082,
0.0637709722,
0.0195533521,
0.2041489929,
-0.0053225802,
0.292031765,
0.5868220925,
0.209287703,
0.1178401485,
-0.3162102997,
0.1631347388,
0.2691357136,
-0.0755069181,
0.0736095458,
-0.1366... |
https://github.com/huggingface/datasets/issues/647 | Cannot download dataset_info.json | Thanks for reporting !
We should add support for servers without internet connection indeed
I'll do that early next week | I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text... | 20 | Cannot download dataset_info.json
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com... | [
-0.2581690848,
0.0243744832,
-0.0594887286,
0.2031507641,
0.0788521543,
0.1288591027,
0.0959737375,
0.2360106558,
0.1991668493,
0.079568252,
0.1454199851,
0.2383176684,
0.2438560128,
0.1896516383,
0.1779322326,
-0.1044931561,
-0.1757184565,
0.0602589808,
0.0163061507,
0.1639958... |
https://github.com/huggingface/datasets/issues/647 | Cannot download dataset_info.json | Right now the recommended way is to create the dataset on a server with an internet connection, save it, and copy the serialized dataset to the server without internet connection. | I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text... | 32 | Cannot download dataset_info.json
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com... | [
-0.2739142478,
0.0575308204,
-0.0347834155,
0.1870901436,
0.0935545266,
0.1803250164,
0.0954203531,
0.2747620344,
0.082955271,
0.0868134648,
0.1172922254,
0.2410643399,
0.2068436891,
0.1971997619,
0.2280212194,
-0.0897161886,
-0.1748177707,
0.0688977838,
0.0379335023,
0.1446815... |
https://github.com/huggingface/datasets/issues/647 | Cannot download dataset_info.json | #652 should allow you to load text/json/csv/pandas datasets without an internet connection **IF** you have the dataset script locally.
Example:
If you have `datasets/text/text.py` locally, then you can do `load_dataset("./datasets/text", data_files=...)` | I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text... | 30 | Cannot download dataset_info.json
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com... | [
-0.2791209221,
0.0578072928,
-0.0465702079,
0.1835988164,
0.1229125708,
0.1868668795,
0.1547662616,
0.2711951435,
0.2081409097,
0.0419170409,
0.048986014,
0.2596796453,
0.282695502,
0.1925578862,
0.1847233921,
-0.0445271693,
-0.2054240704,
0.0323138088,
0.024126254,
0.176589638... |
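The workflow recommended across these comments — build the dataset where you have connectivity, serialize it, then reload it from disk offline (with `datasets` this is `Dataset.save_to_disk` / `load_from_disk`) — follows a generic materialize-once pattern. A stdlib sketch of the idea, with a hypothetical helper name rather than the `datasets` API:

```python
import json
import os


def load_or_materialize(cache_path: str, build_fn):
    """Return cached data if present; otherwise call `build_fn` (which stands
    in for whatever downloads/builds the dataset) and cache its result.
    After the first call, no network access is needed."""
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    data = build_fn()  # only runs where the network is available
    with open(cache_path, "w") as f:
        json.dump(data, f)
    return data
```

Copy the cache file to the air-gapped machine and the second branch never runs there.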
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | Thanks for reporting !
It uses a temporary file to write the data.
However it looks like the temporary file is not placed in the right directory during the processing | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 30 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0562821813,
0.1755033135,
-0.0487306379,
0.3908854425,
-0.0356468223,
-0.0572806224,
0.2760778069,
-0.0244469363,
0.0408121645,
0.1573937237,
0.0506477766,
0.1766867787,
0.0519223511,
0.3638363779,
-0.03565754,
0.3218174875,
0.2768937349,
-0.0544237867,
0.0543462075,
-0.0435... |
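The behavior described in this comment — a temporary file that must end up in the same directory as the final cache file — is the classic write-then-rename pattern. A minimal stdlib sketch (the helper name is illustrative, not the library's internal code):

```python
import os
import tempfile


def atomic_write(path: str, data: bytes) -> None:
    """Write `data` via a temp file in the SAME directory as `path`, then
    os.replace it into place: a partial write never appears under the final
    name, and the rename stays on one filesystem."""
    target_dir = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=target_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

If the temp file were created in a default location (e.g. `/tmp`) instead, the intermediate data would land on the wrong disk, which is the symptom reported in this issue.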
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.
Which version of `datasets` are you using ? | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 29 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0980262011,
0.1880931705,
-0.0280871168,
0.3890115321,
-0.0152986562,
-0.0114679523,
0.3272337019,
-0.0096282084,
0.0338673629,
0.2286567837,
0.0054277871,
0.2275296748,
0.0270036869,
0.3596074879,
-0.0818916261,
0.3205984831,
0.2760050893,
-0.0400121063,
0.064423278,
-0.055... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | It looks like a pyarrow issue with google colab.
For some reason this code increases the disk usage of google colab while it actually writes into google drive:
```python
import pyarrow as pa
stream = pa.OSFile("/content/drive/My Drive/path/to/file.arrow", "wb")
writer = pa.RecordBatchStreamWriter(stream, schem... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 74 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
0.0214204751,
0.2508094907,
-0.0044861445,
0.404622823,
-0.059329439,
-0.097665906,
0.3005951643,
0.0051360354,
-0.2152587324,
0.1271966547,
-0.0451291576,
0.3386110067,
0.1068909019,
0.293412149,
0.06055611,
0.2235910147,
0.2913429439,
-0.0080434429,
0.110989958,
-0.0101503907... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | Actually I did more tests and it doesn't >.<
I'll let you know if I find a way to fix that | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 20 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.100196965,
0.1869411469,
-0.0199803077,
0.4581358135,
-0.0414947458,
-0.0328616202,
0.310751766,
0.0137842586,
0.0069227661,
0.1962366849,
0.0152606834,
0.2812228799,
0.0530099608,
0.4453214407,
-0.0918109193,
0.3288263083,
0.2770380676,
-0.0716738701,
0.0422289968,
-0.08333... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | Actually I also have the issue when writing a regular text file
```python
f = open("/content/drive/My Drive/path/to/file", "w")
f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB
f.close()
```
Is that supposed to happen ? | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 37 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0809622034,
0.1509337276,
-0.0047162957,
0.4641734362,
-0.0074454453,
-0.0837902278,
0.378277272,
-0.0160656814,
-0.0122850528,
0.151423797,
0.0223203972,
0.2040353268,
0.1237225235,
0.4594928026,
-0.0932220742,
0.3096235693,
0.2900454402,
-0.0745369494,
0.0555243231,
-0.103... |
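The 1 GiB probe in the comment above can be reproduced at a safer scale (1 MiB here) and instrumented with `shutil.disk_usage` instead of shelling out to `df`. A sketch with an illustrative helper name; the free-space delta is only meaningful when written to the mount you want to observe (e.g. the mounted Drive folder):

```python
import os
import shutil


def write_and_measure(target_dir: str, n_bytes: int = 1 << 20):
    """Write n_bytes of 512-byte 'a'-lines into target_dir and return
    (file size, change in free space on that mount)."""
    free_before = shutil.disk_usage(target_dir).free
    path = os.path.join(target_dir, "test_to_remove.txt")
    with open(path, "w") as f:
        f.write(("a" * 511 + "\n") * (n_bytes // 512))
    free_after = shutil.disk_usage(target_dir).free
    return os.path.getsize(path), free_before - free_after
```

On a plain local filesystem the delta roughly matches the file size; on Colab's Drive mount the surprise reported here is that the *local* disk shrinks too, because of the mount's local cache.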
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | I checked it and, as you say, as I write to the Drive disk the Colab disk also increases... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 16 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0446972065,
0.1795122921,
-0.0474339724,
0.4067885578,
-0.0353465229,
-0.0158606544,
0.3132925034,
0.0269284993,
-0.0032728014,
0.2124623507,
0.0527833179,
0.2332564294,
0.0173065271,
0.4683628082,
-0.0888159275,
0.3117386997,
0.2533088923,
-0.0614045523,
0.0652198792,
-0.07... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | I could check it and as you say as I write to te Drive disk the colab disk also increases... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 20 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0909002498,
0.0898179933,
-0.0347920954,
0.4908718765,
-0.0633707717,
-0.0270623807,
0.2676790655,
0.0049587754,
0.024215566,
0.227923587,
-0.0148067949,
0.2577181458,
0.0650782213,
0.46861431,
-0.0855878443,
0.3258921802,
0.2614868581,
-0.0908619761,
0.0236125942,
-0.088964... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | To reproduce it:
```bash
!df -h | grep sda1
```
```python
f = open("/content/drive/My Drive/test_to_remove.txt", "w")
f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB
f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB
f.close()
```
```bash
!ls -lh /content/drive/My\ Drive/test_to_remove.txt
!df... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 56 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0841227099,
0.1945448816,
-0.0163263883,
0.4593134522,
-0.0439057387,
-0.0728671029,
0.3859434426,
0.030553652,
0.033721216,
0.2072657049,
-0.0354525633,
0.2843729258,
0.0471326411,
0.405802995,
-0.1138570905,
0.3554212451,
0.277764231,
-0.1252812594,
-0.0011572756,
-0.11845... |
https://github.com/huggingface/datasets/issues/643 | Caching processed dataset at wrong folder | Apparently, Colab uses a local cache of the data files read/written from Google Drive. See:
- https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457
- https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540
- https://github.com/googlecolab/colabtools/issues/2147#issuecommen... | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | 21 | Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncati... | [
-0.0610346422,
0.2720839083,
-0.0283775385,
0.4453432858,
-0.0815309957,
-0.021029219,
0.3334374428,
0.0359667353,
-0.0489297956,
0.2337762415,
-0.0136981849,
0.2756026089,
0.036514245,
0.3952311575,
-0.0828426033,
0.3347193003,
0.2051367611,
-0.0510049984,
0.0500108413,
-0.085... |
https://github.com/huggingface/datasets/issues/633 | Load large text file for LM pre-training resulting in OOM | Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ? | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | 21 | Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(Dat... | [
-0.6339287758,
-0.4775367975,
0.0106936339,
0.2986355424,
0.3600475788,
-0.1518250853,
0.5567325354,
0.3738059103,
0.0108824177,
0.0107197072,
-0.1295929253,
-0.18279998,
-0.2669847012,
-0.1620898843,
-0.0323720984,
0.0232064184,
-0.0998212174,
0.1968754083,
-0.2772670686,
-0.0... |
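The OOM reports in this thread generally trace back to materializing too much of the corpus at once. Independent of the `datasets` internals, the memory-flat way to feed a large text file is to stream it in bounded chunks — a hedged stdlib sketch of the pattern (block size is arbitrary):

```python
def iter_blocks(path: str, lines_per_block: int = 1000):
    """Yield lists of stripped, non-empty lines, `lines_per_block` at a time,
    without ever holding the whole file in memory."""
    block = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue
            block.append(stripped)
            if len(block) == lines_per_block:
                yield block
                block = []
    if block:
        yield block
```

Each block can then be tokenized and discarded before the next one is read, keeping peak RSS proportional to the block size rather than the file size.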
https://github.com/huggingface/datasets/issues/633 | Load large text file for LM pre-training resulting in OOM | There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem. | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | 22 | Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(Dat... | [
-0.6339287758,
-0.4775367975,
0.0106936339,
0.2986355424,
0.3600475788,
-0.1518250853,
0.5567325354,
0.3738059103,
0.0108824177,
0.0107197072,
-0.1295929253,
-0.18279998,
-0.2669847012,
-0.1620898843,
-0.0323720984,
0.0232064184,
-0.0998212174,
0.1968754083,
-0.2772670686,
-0.0... |
https://github.com/huggingface/datasets/issues/633 | Load large text file for LM pre-training resulting in OOM | @lhoestq @sgugger Thanks for your comments. I have installed from source as you suggested, but the problem is still there.
To reproduce the issue, just replace [these lines](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L241-L258) with:
(load_dataset and Da... | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | 80 | Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(Dat... | [
-0.6339287758,
-0.4775367975,
0.0106936339,
0.2986355424,
0.3600475788,
-0.1518250853,
0.5567325354,
0.3738059103,
0.0108824177,
0.0107197072,
-0.1295929253,
-0.18279998,
-0.2669847012,
-0.1620898843,
-0.0323720984,
0.0232064184,
-0.0998212174,
0.1968754083,
-0.2772670686,
-0.0... |
https://github.com/huggingface/datasets/issues/633 | Load large text file for LM pre-training resulting in OOM | Same here. Pre-training on wikitext-103 to do some tests. At the end of the training it takes 32GB of RAM + ~30GB of SWAP. I installed datasets==1.1.0, not built from source. I will try uninstalling and building from source when it finishes. | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | 42 | Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(Dat... | [
-0.6339287758,
-0.4775367975,
0.0106936339,
0.2986355424,
0.3600475788,
-0.1518250853,
0.5567325354,
0.3738059103,
0.0108824177,
0.0107197072,
-0.1295929253,
-0.18279998,
-0.2669847012,
-0.1620898843,
-0.0323720984,
0.0232064184,
-0.0998212174,
0.1968754083,
-0.2772670686,
-0.0... |