Columns (string lengths / value ranges):

html_url        string  length 48 - 51
title           string  length 5 - 268
comments        string  length 63 - 51.8k
body            string  length 0 - 36.2k
comment_length  int64   values 16 - 1.52k
text            string  length 164 - 54.1k
embeddings      list
https://github.com/huggingface/datasets/issues/611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Thanks! Some more detail on the `embeddings` and `picture_url` columns would be nice as well (the element type and max lengths).
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
21
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most rece...
[ -0.2813220918, -0.0744088963, -0.2195497453, 0.4159465432, 0.2823321819, -0.0299139991, 0.4081524312, 0.211316064, 0.3901509643, 0.0474444032, 0.0003714644, 0.3082345128, -0.086982362, 0.0685478151, 0.000835738, -0.3419587016, -0.0933910981, 0.227939263, -0.2755077481, 0.093867...
https://github.com/huggingface/datasets/issues/611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
It looks like a Pyarrow limitation. I was able to reproduce the error with ```python import pandas as pd import numpy as np import pyarrow as pa n = 1713614 df = pd.DataFrame.from_dict({"a": list(np.zeros((n, 128))), "b": range(n)}) pa.Table.from_pandas(df) ``` I also tried with 50% of the dataframe a...
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
75
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most rece...
[ -0.2813220918, -0.0744088963, -0.2195497453, 0.4159465432, 0.2823321819, -0.0299139991, 0.4081524312, 0.211316064, 0.3901509643, 0.0474444032, 0.0003714644, 0.3082345128, -0.086982362, 0.0685478151, 0.000835738, -0.3419587016, -0.0933910981, 0.227939263, -0.2755077481, 0.093867...
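The 2147483646 in the error message is Arrow's capacity limit for a single ListArray: list offsets are signed 32-bit integers, so one array can hold at most 2^31 - 2 child elements, and a fixed-width embedding column overflows it once rows × width crosses that bound. A small stdlib sketch of the arithmetic (the row counts below are illustrative, except 1713614, which comes from the repro above):

```python
# Arrow stores list offsets as signed 32-bit integers, so a single
# ListArray can hold at most 2**31 - 2 child elements.
ARROW_MAX_CHILD_ELEMENTS = 2**31 - 2  # 2147483646, the figure in the error

def chunks_needed(num_rows: int, list_width: int) -> int:
    """Minimum number of chunks so that each chunk's flattened list
    column stays within Arrow's capacity limit."""
    total_children = num_rows * list_width
    return -(-total_children // ARROW_MAX_CHILD_ELEMENTS)  # ceil division

# The error reports 2147483648 children: 2 over the limit, so splitting
# the frame into 2 chunks would already be enough for that column.
print(chunks_needed(16_777_216, 128))  # 16_777_216 * 128 == 2**31
print(chunks_needed(1_713_614, 128))   # ~219M children, under the limit
```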
https://github.com/huggingface/datasets/issues/611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
It looks like it's going to be fixed in pyarrow 2.0.0 :) In the meantime I suggest to chunk big dataframes to create several small datasets, and then concatenate them using [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datas...
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
32
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most rece...
[ -0.2813220918, -0.0744088963, -0.2195497453, 0.4159465432, 0.2823321819, -0.0299139991, 0.4081524312, 0.211316064, 0.3901509643, 0.0474444032, 0.0003714644, 0.3082345128, -0.086982362, 0.0685478151, 0.000835738, -0.3419587016, -0.0933910981, 0.227939263, -0.2755077481, 0.093867...
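The suggested workaround (chunk the dataframe, build small datasets, then concatenate) can be sketched with plain Python lists standing in for the DataFrame and the dataset objects, so the example stays self-contained; in the real workflow each chunk would go through `Dataset.from_pandas` and the pieces would be merged with `concatenate_datasets`:

```python
# Plain-Python sketch of the chunking workaround; lists stand in for the
# pandas DataFrame and the datasets objects.

def split_into_chunks(rows, chunk_size):
    """Yield successive chunk_size-sized slices of rows."""
    for start in range(0, len(rows), chunk_size):
        yield rows[start:start + chunk_size]

rows = list(range(10))
parts = list(split_into_chunks(rows, 4))          # sizes 4, 4, 2
merged = [row for part in parts for row in part]  # the "concatenate" step
```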
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Could you try ```python load_dataset('text', data_files='test.txt',cache_dir="./", split="train") ``` ? `load_dataset` returns a dictionary by default, like {"train": your_dataset}
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
18
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @lhoestq Thanks for your suggestion. I tried ``` dataset = load_dataset('text', data_files='test.txt',cache_dir="./", split="train") print(dataset) dataset.set_format(type='torch',columns=["text"]) dataloader = torch.utils.data.DataLoader(dataset, batch_size=8) next(iter(dataloader)) ``` But it still ...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
312
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
You need to tokenize the string inputs to convert them in integers before you can feed them to a pytorch dataloader. You can read the quicktour of the datasets or the transformers libraries to know more about that: - transformers: https://huggingface.co/transformers/quicktour.html - dataset: https://huggingface.co...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
44
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
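The point about tokenizing before the dataloader can be shown with a toy whitespace tokenizer; this is a stand-in for a real transformers tokenizer, with made-up PAD/UNK ids, just to show that strings must become fixed-length integer ids before they can be batched into tensors:

```python
# Toy stand-in for a real tokenizer: maps words to integer ids and pads
# every example to the same length, which is what a dataloader needs.
PAD_ID, UNK_ID = 0, 1

def build_vocab(texts):
    vocab = {}
    for text in texts:
        for word in text.split():
            vocab.setdefault(word, len(vocab) + 2)  # ids 0/1 are reserved
    return vocab

def encode(text, vocab, max_length):
    ids = [vocab.get(w, UNK_ID) for w in text.split()][:max_length]
    return ids + [PAD_ID] * (max_length - len(ids))  # pad to fixed length

batch = ["hello world", "hello there friend"]
vocab = build_vocab(batch)
encoded = [encode(t, vocab, max_length=4) for t in batch]
```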
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB). But finally got it working. This is what I did after looking into the documentation. 1. split the whole dataset file into smaller files ```bash mkdir ./shards split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/shard_ ```` 2. Pa...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
125
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
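The `split -a 4 -l 256000 -d` step above can be approximated in pure Python for readers without coreutils at hand; a rough sketch, with a hypothetical `shard_file` helper:

```python
import os
import tempfile

def shard_file(path, out_dir, lines_per_shard):
    """Write consecutive blocks of lines_per_shard lines from path into
    numbered files under out_dir, roughly like `split -l <n> -d`."""
    shard_paths = []
    buf = []

    def flush():
        if buf:
            out = os.path.join(out_dir, f"shard_{len(shard_paths):04d}")
            with open(out, "w", encoding="utf-8") as g:
                g.writelines(buf)
            shard_paths.append(out)
            buf.clear()

    with open(path, encoding="utf-8") as f:
        for line in f:
            buf.append(line)
            if len(buf) == lines_per_shard:
                flush()
    flush()  # write the final, possibly partial shard
    return shard_paths

# Tiny demo: 10 lines split into shards of 4 -> 3 shard files.
tmp = tempfile.mkdtemp()
corpus = os.path.join(tmp, "full_raw_corpus.txt")
with open(corpus, "w", encoding="utf-8") as f:
    f.write("".join(f"line {i}\n" for i in range(10)))
shards = shard_file(corpus, tmp, lines_per_shard=4)
```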
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Thanks, @thomwolf and @sipah00 , I tried to implement your suggestions in my scripts. Now, I am facing some connection time-out error. I am using my local file, I have no idea why the module request s3 database. The log is: ``` Traceback (most recent call last): File "/home/.local/lib/python3.6/site-packa...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
248
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
I noticed this is because I use a cloud server that does not allow connections from our standard compute nodes to outside resources. For the `datasets` package, it seems that if the loading script is not already cached in the library, it will attempt to connect to an AWS resource to download the dataset loadi...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
76
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
I solved the above issue by downloading text.py manually and passing the path to the `load_dataset` function. Now, I have a new issue with the Read-only file system. The error is: ``` I0916 22:14:38.453380 140737353971520 filelock.py:274] Lock 140734268996072 acquired on /scratch/chiyuzh/roberta/text.py.lock ...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
214
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB). > But finally got it working. This is what I did after looking into the documentation. > > 1. split the whole dataset file into smaller files > > ```shell > mkdir ./shards > split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
254
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> > Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB). > > But finally got it working. This is what I did after looking into the documentation. > > > > 1. split the whole dataset file into smaller files > > > > ```shell > > mkdir ./shards > > split -a 4 -l 256000 -d full_raw_corp...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
331
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> ```python > def encode(examples): > return tokenizer(examples['text'], truncation=True, padding='max_length') > ``` It is the same as suggested: > def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length')
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
25
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
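The `encode` function quoted above is written for datasets' batched `.map()`, where the mapped function receives a batch as a dict of columns (each a list) and returns new columns. A minimal stdlib sketch of those semantics, with a toy `encode` standing in for the tokenizer call:

```python
# Minimal stand-in for datasets' batched .map(): the mapped function gets
# a dict of columns (each a list) and returns new columns.
def batched_map(table, fn, batch_size=2):
    n = len(next(iter(table.values())))
    out = {}
    for start in range(0, n, batch_size):
        batch = {k: v[start:start + batch_size] for k, v in table.items()}
        for k, v in fn(batch).items():
            out.setdefault(k, []).extend(v)
    return out

def encode(examples):
    # Toy version of the thread's encode(); the real one would call
    # tokenizer(examples['text'], truncation=True, padding='max_length').
    return {"n_words": [len(t.split()) for t in examples["text"]]}

table = {"text": ["a b", "c", "d e f", "g h"]}
encoded = batched_map(table, encode)
```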
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> > ```python > > def encode(examples): > > return tokenizer(examples['text'], truncation=True, padding='max_length') > > ``` > > It is the same as suggested: > > > def encode(examples): > > return tokenizer(examples['text'], truncation=True, padding='max_length') Do you use this function in a `class` ob...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
60
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> > > ```python > > > def encode(examples): > > > return tokenizer(examples['text'], truncation=True, padding='max_length') > > > ``` > > > > > > It is the same as suggested: > > > def encode(examples): > > > return tokenizer(examples['text'], truncation=True, padding='max_length') > > Do you use this fu...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
250
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> > > > ```python > > > > def encode(examples): > > > > return tokenizer(examples['text'], truncation=True, padding='max_length') > > > > ``` > > > > > > > > > It is the same as suggested: > > > > def encode(examples): > > > > return tokenizer(examples['text'], truncation=True, padding='max_length') > > ...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
357
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
@chiyuzhang94 Thanks for your reply. After some changes, I managed to get the data loading process running. I published it in case you want to take a look. Thanks for your help! https://github.com/shizhediao/Transformers_TPU
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
35
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @shizhediao , Thanks! It looks great! But my problem is still that the cache directory is on a read-only file system. [As I mentioned](https://github.com/huggingface/datasets/issues/610#issuecomment-693912285), I tried to change the cache directory, but it didn't work. Do you have any suggestions?
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
39
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> I installed datasets at /project/chiyuzh/evn_py36/datasets/src where is a writable directory. > I also tried change the environment variables to the writable directory: > `export HF_MODULES_PATH=/project/chiyuzh/evn_py36/datasets/cache_dir/` I think it is `HF_MODULES_CACHE` and not `HF_MODULES_PATH` @chiyuzhang9...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
50
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
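Per the comment above, the variable is `HF_MODULES_CACHE` (not `HF_MODULES_PATH`). One detail worth showing: cache-related environment variables are typically read at import time, so they must be set before the library is imported. The path below is illustrative:

```python
import os

# Override the modules cache with a writable location *before* importing
# the library; variables like this are usually read at import time.
os.environ["HF_MODULES_CACHE"] = "/project/writable/cache_dir"  # illustrative path

# import datasets  # only now would the override take effect
```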
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
We should probably add a section to the docs on the caching system, covering the environment variables in particular.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
19
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @thomwolf , @lhoestq , Thanks for your suggestions. With the latest version of this package, I can load text data without Internet. But I found the speed of dataset loading is very slow. My scrips like this: ``` def token_encode(examples): tokenizer_out = tokenizer(examples['text'], trunca...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
129
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
You can use multiprocessing by specifying `num_proc=` in `.map()`. Also, it looks like you have `1123871` batches of 1000 elements (the default batch size), i.e. 1,123,871,000 lines in total. Am I right?
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
32
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
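The batch count quoted above checks out arithmetically: batched `.map()` defaults to 1000 rows per batch, and the corpus in this thread is reported as 1,123,870,657 lines, so the last batch is partial:

```python
DEFAULT_BATCH_SIZE = 1000  # datasets' default batch size for batched .map()
total_lines = 1_123_870_657  # line count reported in this thread
num_batches = -(-total_lines // DEFAULT_BATCH_SIZE)  # ceil division
print(num_batches)  # 1123871, matching the progress bar
```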
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> You can use multiprocessing by specifying `num_proc=` in `.map()` > > Also it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total. > Am I right ? Hi @lhoestq , Thanks. I will try it. You are right. I have 1,123,870,657 lines totally in the path. ...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
141
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @lhoestq , I tried to use multiprocessing, but I got the errors below. Because I am using Python distributed training, it seems to conflict with the distributed job. Do you have any suggestions? ``` I0925 10:19:35.603023 140737353971520 filelock.py:318] Lock 140737229443368 released on /tmp/pbs.1120510...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
157
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
For multiprocessing, the function given to `map` must be picklable. Maybe you could try to define `token_encode` outside `HG_Datasets` ? Also maybe #656 could make functions defined locally picklable for multiprocessing, once it's merged.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
34
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
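The picklability requirement described in the comment above can be checked directly with the stdlib: a function defined at module level pickles by reference, while one defined inside another function (like `token_encode` inside `HG_Datasets`) cannot be serialized, which is what makes `map(num_proc > 1)` fail. A minimal sketch; the function names here are stand-ins, not the thread's actual code:

```python
import pickle

def token_encode(example):
    # module-level stand-in for the real tokenization step: picklable
    return {"n_chars": len(example["text"])}

def make_local_encoder():
    # defined inside another function, like token_encode inside
    # HG_Datasets: the stdlib pickle cannot serialize local objects
    def local_encode(example):
        return {"n_chars": len(example["text"])}
    return local_encode

def is_picklable(fn):
    try:
        pickle.dumps(fn)
        return True
    except (pickle.PicklingError, AttributeError, TypeError):
        return False

print(is_picklable(token_encode))          # True
print(is_picklable(make_local_encoder()))  # False
```

Moving the mapped function to module scope is usually enough to make `num_proc > 1` work.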
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> I have another question. Because I am using a cloud server that only allows running a job for up to 7 days, I need to resume my model every week. If the script needs to load and process the dataset every time, it is very inefficient at the current processing speed. Is it possible that I process the datase...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
100
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
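The resume workflow asked about here exists in the library as `dataset.save_to_disk(path)` plus `datasets.load_from_disk(path)` (the same methods discussed in issue #595 further down); the control flow is just "reuse the persisted result if it exists". A library-free sketch of that pattern, with a hypothetical `expensive_preprocess` standing in for the slow tokenization step:

```python
import json
import os
import tempfile

def expensive_preprocess(lines):
    # hypothetical stand-in for the slow tokenize/group step
    return [{"text": line, "n_tokens": len(line.split())} for line in lines]

def load_or_build(cache_path, lines):
    # first run: pay the processing cost once and persist the result;
    # later runs (e.g. after a 7-day job limit) just reload the file
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    data = expensive_preprocess(lines)
    with open(cache_path, "w") as f:
        json.dump(data, f)
    return data

path = os.path.join(tempfile.mkdtemp(), "cache.json")
first = load_or_build(path, ["a b", "c"])
second = load_or_build(path, ["ignored"])  # second run hits the cache
print(second)
```

With `datasets` the same shape applies: save once after processing, then load from disk on every restart instead of re-running `map`.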
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @lhoestq , Thanks for your suggestion. I tried to process the dataset and save it to disk. I have 1.12B samples in the raw dataset. I used 16 processors. I ran this processing job for 7 days, but it didn't finish. I don't know why the processing is so slow. The log shows that some processors (\#12, \#14, \#15)...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
219
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi ! As far as I can tell, there could be several reasons for your processes to have different speeds: - some parts of your dataset have short passages while some have longer passages, that take more time to be processed - OR there are other processes running that prevent some of them to run at full speed - OR th...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
174
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
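The first cause listed in the comment above (some contiguous shards holding mostly long passages) suggests balancing shards by total text length rather than by row count before mapping each shard in its own process. A hypothetical greedy sketch, not the library's actual sharding logic:

```python
import heapq

def balance_by_length(texts, num_proc):
    # longest-first greedy assignment: each passage goes to the worker
    # with the smallest total character load so far
    heap = [(0, i, []) for i in range(num_proc)]  # (load, worker_id, bucket)
    heapq.heapify(heap)
    for text in sorted(texts, key=len, reverse=True):
        load, i, bucket = heapq.heappop(heap)
        bucket.append(text)
        heapq.heappush(heap, (load + len(text), i, bucket))
    return [bucket for _, _, bucket in sorted(heap, key=lambda item: item[1])]

buckets = balance_by_length(["aaaa", "bbb", "cc", "d"], 2)
print([sum(map(len, b)) for b in buckets])  # [5, 5]
```

Per-worker loads come out roughly even in characters, so no process lags far behind the others just because it drew the long passages.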
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> Do you use a fast or a slow tokenizer from the `transformers` library @chiyuzhang94? Hi @thomwolf , I use this: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir) ``` I guess this is a slow one; let me explore the fast tokenizer.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
41
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
> Hi ! > > As far as I can tell, there could be several reasons for your processes to have different speeds: > > * some parts of your dataset have short passages while some have longer passages, that take more time to be processed > * OR there are other processes running that prevent some of them to run at full ...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
312
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/610
Load text file for RoBERTa pre-training.
Hi @thomwolf I am using `RobertaTokenizerFast` now, but the speed is still imbalanced; some processors are still slow. Here is part of the log. #0 is always much faster than lower-rank processors. ``` #15: 3%|▎ | 115/3513 [3:18:36<98:01:33, 103.85s/ba] #2: 24%|██▍ | 847/3513 [3:20:43<11...
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
198
Load text file for RoBERTa pre-training. I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried t...
[ -0.2351641655, -0.2028223872, -0.0119608259, 0.408446759, 0.3831169605, -0.1324439198, 0.513992548, 0.4229527414, -0.1767198443, 0.048769027, -0.2432832271, 0.1370510906, -0.1445557326, -0.3892418742, -0.038836401, -0.3020623922, -0.1219130978, 0.1924060285, -0.4711352587, -0.0...
https://github.com/huggingface/datasets/issues/600
Pickling error when loading dataset
I wasn't able to reproduce on google colab (python 3.6.9 as well) with pickle==4.0 dill=0.3.2 transformers==3.1.0 datasets=1.0.1 (also tried nlp 0.4.0) If I try ```python from datasets import load_dataset # or from nlp from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("...
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_da...
61
Pickling error when loading dataset Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_si...
[ -0.2029625177, -0.191864118, 0.1510746032, 0.265044421, 0.1417568773, -0.1736164987, 0.2733695805, 0.3465311229, 0.0597778335, -0.0787869915, 0.0942319036, 0.3698835075, -0.2363301665, 0.0360409319, 0.0861706808, -0.4138219655, -0.0133402422, 0.1228951737, -0.1728640646, 0.0111...
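A common workaround when a `map` lambda that captures a tokenizer fails to pickle (the repro above passes a lambda to `map`) is to bind arguments with `functools.partial` on a module-level function: partials of top-level functions serialize with the stdlib pickle, while lambdas do not. A sketch with a hypothetical `encode` helper standing in for the tokenizer call:

```python
import pickle
from functools import partial

def encode(text, prefix):
    # module-level helper standing in for tokenizer(text, truncation=...)
    return {"out": prefix + text}

bound = partial(encode, prefix=">> ")     # partials of top-level functions pickle fine
restored = pickle.loads(pickle.dumps(bound))
print(restored("hello"))                  # {'out': '>> hello'}

try:
    pickle.dumps(lambda text: encode(text, ">> "))  # lambdas do not
    lambda_ok = True
except (pickle.PicklingError, AttributeError):
    lambda_ok = False
print(lambda_ok)                          # False
```

In a `datasets` script the same shape would be `dataset.map(partial(encode_fn, tokenizer=tokenizer))` with `encode_fn` defined at module level.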
https://github.com/huggingface/datasets/issues/600
Pickling error when loading dataset
Closing since it looks like it's working on >= 3.6.9 Feel free to re-open if you have other questions :)
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_da...
20
Pickling error when loading dataset Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_si...
[ -0.2029625177, -0.191864118, 0.1510746032, 0.265044421, 0.1417568773, -0.1736164987, 0.2733695805, 0.3465311229, 0.0597778335, -0.0787869915, 0.0942319036, 0.3698835075, -0.2363301665, 0.0360409319, 0.0861706808, -0.4138219655, -0.0133402422, 0.1228951737, -0.1728640646, 0.0111...
https://github.com/huggingface/datasets/issues/598
The current version of the package on github has an error when loading dataset
Thanks for reporting ! Which version of transformers are you using ? It looks like it doesn't have the PreTrainedTokenizerBase class
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
21
The current version of the package on github has an error when loading dataset Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``...
[ -0.223743692, -0.2012974173, -0.0425650477, 0.0305603985, 0.0355139822, -0.0190251991, -0.1124797165, 0.2452017665, -0.0017799253, -0.2171768099, 0.2775867581, 0.3542349637, -0.2464632988, 0.2480498254, 0.3240589499, -0.2881518304, 0.0670979768, 0.3065454364, -0.237520799, 0.05...
https://github.com/huggingface/datasets/issues/598
The current version of the package on github has an error when loading dataset
I was using transformers 2.9, and I switched to the latest transformers package. Everything works just fine!! Thanks for helping! I should look more carefully next time. I didn't realize the data-loading part requires using the tokenizer.
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
36
The current version of the package on github has an error when loading dataset Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``...
[ -0.223743692, -0.2012974173, -0.0425650477, 0.0305603985, 0.0355139822, -0.0190251991, -0.1124797165, 0.2452017665, -0.0017799253, -0.2171768099, 0.2775867581, 0.3542349637, -0.2464632988, 0.2480498254, 0.3240589499, -0.2881518304, 0.0670979768, 0.3065454364, -0.237520799, 0.05...
https://github.com/huggingface/datasets/issues/598
The current version of the package on github has an error when loading dataset
Yes, it shouldn’t fail with older versions of transformers since this is only a special feature to make caching more efficient when using transformers for tokenization. We’ll update this.
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
29
The current version of the package on github has an error when loading dataset Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``...
[ -0.223743692, -0.2012974173, -0.0425650477, 0.0305603985, 0.0355139822, -0.0190251991, -0.1124797165, 0.2452017665, -0.0017799253, -0.2171768099, 0.2775867581, 0.3542349637, -0.2464632988, 0.2480498254, 0.3240589499, -0.2881518304, 0.0670979768, 0.3065454364, -0.237520799, 0.05...
https://github.com/huggingface/datasets/issues/597
Indices incorrect with multiprocessing
I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?
When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10...
21
Indices incorrect with multiprocessing When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, ...
[ -0.4318974614, -0.3361009657, -0.1858978719, 0.2841148674, -0.2439695597, -0.0441071503, 0.4962970614, 0.0556303039, 0.2259729803, 0.3695326447, -0.0362008438, 0.3468488753, 0.1309847683, 0.1321683973, -0.2639611065, 0.1251574755, -0.0425509997, 0.0501092598, -0.1806093901, -0....
https://github.com/huggingface/datasets/issues/597
Indices incorrect with multiprocessing
Still the case on master. I guess we should have an offset in the multi-procs indeed (hopefully it's enough). Also, a side note: we should add some logging before the "test" to say we are testing the function, otherwise it's confusing for the user to see two outputs, I think. Proposal (see the "Testing the mapped...
When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10...
163
Indices incorrect with multiprocessing When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, ...
[ -0.391076684, -0.4112409353, -0.1836056113, 0.2711610496, -0.3301047087, -0.1160227731, 0.5005037785, 0.1166044101, 0.1060397699, 0.4203521609, 0.0210540742, 0.3124268651, 0.0579330362, 0.1175345331, -0.2667938471, 0.2599061131, -0.0655997545, 0.0601687841, -0.3335288763, -0.18...
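The offset fix suggested in the comment above (each worker's `indices` must start where the previous worker's contiguous shard ends, instead of every worker restarting at 0) can be sketched with hypothetical helper names:

```python
def shard_sizes_and_offsets(num_rows, num_proc):
    # contiguous sharding: the first (num_rows % num_proc) shards get
    # one extra row, and each shard's index offset is the cumulative
    # size of the shards before it
    base, rem = divmod(num_rows, num_proc)
    sizes = [base + (1 if rank < rem else 0) for rank in range(num_proc)]
    offsets, start = [], 0
    for size in sizes:
        offsets.append(start)
        start += size
    return sizes, offsets

sizes, offsets = shard_sizes_and_offsets(10, 3)
print(sizes, offsets)  # [4, 3, 3] [0, 4, 7]
```

With this bookkeeping, worker `k` reports indices `offsets[k] .. offsets[k] + sizes[k] - 1`, matching the example output expected in the issue.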
https://github.com/huggingface/datasets/issues/595
`Dataset`/`DatasetDict` has no attribute 'save_to_disk'
`pip install git+https://github.com/huggingface/nlp.git` should have done the job. Did you uninstall `nlp` before installing from github ?
Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p...
17
`Dataset`/`DatasetDict` has no attribute 'save_to_disk' Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) ...
[ -0.0576852225, 0.3064987957, -0.0244450867, 0.0952449962, 0.189276278, 0.2053218335, -0.0553885289, 0.1952714026, -0.1948142499, -0.0407714918, 0.1610012949, 0.6358866692, -0.2991713881, -0.0846180096, 0.4080448747, 0.015364157, 0.2040481716, 0.4670260549, 0.0720531642, -0.2329...
https://github.com/huggingface/datasets/issues/595
`Dataset`/`DatasetDict` has no attribute 'save_to_disk'
> Did you uninstall `nlp` before installing from github ? I did not. I created a new environment and installed `nlp` directly from `github` and it worked! Thanks.
Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p...
28
`Dataset`/`DatasetDict` has no attribute 'save_to_disk' Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) ...
[ -0.051176209, 0.3495247364, -0.0176615864, 0.1092850193, 0.1668481976, 0.1748827696, -0.0380238518, 0.1976731271, -0.1885311007, -0.0432850011, 0.1474730968, 0.6693129539, -0.3041143119, -0.0753773004, 0.4125317633, 0.0165581647, 0.203982234, 0.453209281, 0.0589581504, -0.23290...
https://github.com/huggingface/datasets/issues/590
The process cannot access the file because it is being used by another process (windows)
Hi, which version of `nlp` are you using? By the way, we'll be releasing a significant update today fixing many issues (but also comprising a few breaking changes). You can see more information in #545 and try it by installing from source from the master branch.
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
46
The process cannot access the file because it is being used by another process (windows) Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\env...
[ -0.1765671074, 0.1705411524, -0.0338350646, 0.1972523332, 0.2760627866, 0.0469865352, 0.24616386, 0.3142566681, 0.1317552775, 0.1372043192, -0.0612204596, 0.5293021798, -0.3115154505, -0.068044059, 0.0767637566, 0.0142779127, -0.027466733, 0.1223231554, 0.0646730661, 0.15223377...
https://github.com/huggingface/datasets/issues/590
The process cannot access the file because it is being used by another process (windows)
Ok, it's probably fixed on master. Otherwise, if you can give me a fully self-contained example to reproduce the error, I can try to investigate.
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
25
The process cannot access the file because it is being used by another process (windows) Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\env...
[ -0.2114655823, 0.0665595904, -0.0459682867, 0.2789512575, 0.3627558649, 0.1312585473, 0.3703976572, 0.2482225001, 0.2258243263, 0.2196417451, -0.1714620292, 0.2673845291, -0.1824053079, -0.1288235635, -0.1086884812, 0.065444693, -0.0072369291, 0.0088252928, 0.0774260163, 0.2269...
https://github.com/huggingface/datasets/issues/590
The process cannot access the file because it is being used by another process (windows)
I get the same behavior, on Windows, when `map`ping a function to a loaded dataset. The error doesn't occur if I re-run the cell a second time though! I'm on version 1.0.1.
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
32
The process cannot access the file because it is being used by another process (windows) Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\env...
[ -0.1413621306, 0.0115137417, -0.0121004293, 0.0894323811, 0.3479507565, 0.058795955, 0.4676166177, 0.1985684484, 0.2368674427, 0.2460597306, -0.2052187175, 0.2554391026, -0.0708039477, -0.2149477303, 0.0211807322, 0.1801768392, 0.0048986962, 0.097813271, 0.1116589531, 0.2412268...
https://github.com/huggingface/datasets/issues/590
The process cannot access the file because it is being used by another process (windows)
@saareliad I got the same issue; it troubled me for quite a while. Unfortunately, there are no good answers to this issue online. I tried it on Linux and that's absolutely fine. After hacking the source code, I solved this problem as follows. In the source code file: arrow_dataset.py -> _map_single(...) change ```p...
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
111
The process cannot access the file because it is being used by another process (windows) Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\env...
[ -0.1046754345, 0.1074614748, -0.0233314205, 0.2269621342, 0.3214992583, 0.0007901794, 0.4211468101, 0.1908869743, 0.1951684654, 0.1922734231, -0.176742658, 0.3607426286, -0.1743267775, -0.2670362294, -0.12334764, 0.0355835073, 0.0496971682, 0.0130527839, 0.2309347391, 0.2061700...
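The hack described above edits `_map_single` in place; a less invasive variant of the same idea is to retry the failing `shutil.move` until Windows releases the file handle. A hedged sketch of that workaround, not the library's actual code:

```python
import os
import shutil
import tempfile
import time

def move_with_retry(src, dst, attempts=5, delay=0.5):
    # a transient handle (antivirus, indexer, another process) can hold
    # the temp file on Windows; retrying the move usually succeeds
    for attempt in range(attempts):
        try:
            shutil.move(src, dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# tiny demo: move a freshly written temp file
tmp_dir = tempfile.mkdtemp()
src = os.path.join(tmp_dir, "tmp_cache.arrow")
with open(src, "w") as f:
    f.write("data")
dst = os.path.join(tmp_dir, "cache.arrow")
move_with_retry(src, dst)
print(os.path.exists(dst))  # True
```

Wrapping only the move keeps the caching behavior intact instead of disabling it, unlike the try/except-without-cache workaround mentioned below.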
https://github.com/huggingface/datasets/issues/590
The process cannot access the file because it is being used by another process (windows)
@wangcongcong123 thanks for sharing. (BTW I also solved it locally on Windows by putting the problematic line under try/except and not using the cache... on Windows I just needed 1% of the dataset anyway)
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
34
The process cannot access the file because it is being used by another process (windows) Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\env...
[ -0.1660127044, -0.0378361195, -0.0557806231, 0.2940470874, 0.3540612459, 0.1522649676, 0.336876452, 0.1749327332, 0.2806836367, 0.2250538021, -0.1342354268, 0.2158223689, -0.1404850185, -0.1683883071, -0.1173464507, 0.0108609926, 0.0012657129, 0.0442124121, 0.2459805757, 0.2743...
https://github.com/huggingface/datasets/issues/580
nlp re-creates already-there caches when using a script, but not within a shell
Couldn't reproduce on my side :/ let me know if you manage to reproduce in another env (Colab for example)
`nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell. Example: try running ``` import nlp hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0) hans_hard_data = nlp.load_dataset('hans', s...
20
nlp re-creates already-there caches when using a script, but not within a shell `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell. Example: try running ``` import nlp hans_easy_data = nlp.load_dataset('hans', split="validatio...
[ 0.0140549447, 0.1271245033, 0.0085561499, 0.0359591693, -0.0078008161, -0.2637971044, 0.155112192, 0.1553376764, 0.3577841818, -0.1724364907, -0.1574477702, 0.287982285, -0.0159156788, -0.1409790367, 0.3365516961, 0.1459570676, -0.0274407286, 0.1052430123, 0.1590004116, -0.0974...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for "fr" and "en" for example. For other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site. Parsing can take some time for languages with hundre...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
88
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.2373758256, -0.2088863403, -0.1352215111, 0.4252156317, 0.1233771667, 0.275894165, 0.1313817501, 0.135695532, 0.7319021821, -0.2289886624, 0.0522107854, 0.0235067122, 0.1044931784, -0.2209716141, 0.0679350197, -0.1553102434, 0.0418750234, -0.11144609, 0.1551073194, -0.3610844...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Ok, thanks for clarifying, that makes sense. I will time those examples later today and post back here. Also, it seems that not all dumps should use the same date. For instance, I was checking the Spanish dump doing the following: ``` data = nlp.load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner', ...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
252
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.3074043691, -0.1131634563, -0.1143274233, 0.2666959167, 0.0679333434, 0.215950489, 0.188642323, 0.1882863045, 0.6027125716, -0.2003646344, 0.0329797529, 0.0883672461, 0.1432430297, -0.2862561345, 0.0379250422, -0.1857150346, 0.0711627379, -0.05266954, 0.099996455, -0.32325860...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Thanks! This will be very helpful. About the date issue, I think it's possible to use another date with ```python load_dataset("wikipedia", language="es", date="...", beam_runner="...") ``` However, we've not processed wikipedia dumps for dates other than 20200501 (yet?). One more thing that is specific t...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
77
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.1844168305, -0.1612694263, -0.1408633292, 0.3425940871, 0.0797083452, 0.2668134868, 0.1986878812, 0.1461936831, 0.7380706668, -0.2608359158, 0.1171044111, 0.1079085693, 0.0482873134, -0.1868232042, 0.0673660412, -0.2168418914, 0.1130850539, -0.1392921507, 0.0121253952, -0.298...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Cool! Thanks for the trick regarding different dates! I checked the download/processing time for retrieving the Arabic Wikipedia dump, and it took about 3.2 hours. I think that this may be a bit impractical when it comes to working with multiple languages (although I understand that storing those datasets in your Go...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
202
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.2451881468, -0.1499558538, -0.1770798564, 0.4052179456, 0.0632278919, 0.2607443035, 0.2458159328, 0.1112545654, 0.6888965964, -0.2664471269, -0.021399362, 0.0063386499, 0.0856512487, -0.2514180839, 0.0070183068, -0.180287376, 0.0203863047, -0.1383440495, 0.1124567389, -0.2367...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
> About the date issue, I think it's possible to use another date with > ```python > load_dataset("wikipedia", language="es", date="...", beam_runner="...") > ``` I tried your suggestion about the date and the function does not accept the language and date keywords. I tried both on `nlp` v0.4 and the new `dataset...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
841
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.2467410266, -0.0418633111, -0.1210311875, 0.3122998178, 0.1007469743, 0.2001997679, 0.1836345345, 0.1085749716, 0.6142449379, -0.2803643346, 0.1189877242, 0.2007733434, 0.0345286764, -0.1363366544, 0.0880576074, -0.2634975016, 0.0546106882, -0.1040419266, 0.030313259, -0.2449...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Hey @gaguilar , I just found the ["char2subword" paper](https://arxiv.org/pdf/2010.12730.pdf) and I'm really interested in trying it out on my own vocabs/datasets, such as historical texts (I've already [trained some lms](https://github.com/stefan-it/europeana-bert) on newspaper articles with OCR errors). Do you pla...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
57
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.3906112909, -0.2458595186, -0.0904508755, 0.4170151949, 0.0234596729, 0.1634708941, 0.1849510521, 0.1755526811, 0.511062026, -0.2657081187, 0.0157992486, -0.1771206856, -0.0163378119, -0.1042659506, 0.0836660936, -0.0257931482, 0.1556573063, -0.0064993016, 0.0507140271, -0.29...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Hi @stefan-it! Thanks for your interest in our work! We do plan to release the code, but we will make it available once the paper has been published at a conference. Sorry for the inconvenience! Hi @lhoestq, do you have any insights for this issue by any chance? Thanks!
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
49
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.2403036952, -0.2484169155, -0.1631051451, 0.4403240383, 0.1737348735, 0.248884201, 0.149755314, 0.1210620031, 0.7601819634, -0.2220047861, 0.0222221334, 0.0713406652, 0.0954833329, -0.1848978996, 0.1068779603, -0.1616461724, 0.0440613851, -0.1030342877, 0.0729286298, -0.35418...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
This is an issue on the `mwparserfromhell` side. You could try to update `mwparserfromhell` and see if it fixes the issue. If it doesn't we'll have to create an issue on their repo for them to fix it. But first let's see if the latest version of `mwparserfromhell` does the job.
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
51
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.0810053498, -0.1919706911, -0.1455876529, 0.3570189476, 0.1417108178, 0.2244730741, 0.1284716576, 0.1698515713, 0.7591429353, -0.1323027164, 0.12581788, 0.0651479661, 0.0723558441, -0.225641042, 0.1120555848, -0.1160826385, 0.0668617412, -0.0926691666, -0.119749099, -0.330694...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
I think the workaround suggested in issue [#886] is not working for several languages, such as `id`. For example, I tried all the dates to download the dataset for the `id` language from the following link: (https://github.com/huggingface/datasets/pull/886) [https://dumps.wikimedia.org/idwiki/](https://dumps.wikimedia...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
274
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.1800151318, -0.1523417681, -0.0989489406, 0.4668172002, 0.1006341204, 0.2423327416, 0.1486939639, 0.1840967834, 0.7133532763, -0.2562918365, -0.0293332133, 0.0879323557, 0.1379878968, -0.0704494938, 0.0778176636, -0.2036104947, 0.0833217949, -0.1366671622, 0.0640652999, -0.22...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Hi ! The link https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json seems to be working fine for me. Regarding the time outs, it must come either from an issue on the wikimedia host side, or from your internet connection. Feel free to try again several times.
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
40
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.192068696, -0.2602578402, -0.1416169852, 0.3762248158, 0.1157503203, 0.2121073306, 0.1645561755, 0.1551885605, 0.7062772512, -0.2246860564, 0.1179804206, 0.0496405885, 0.2343843281, -0.2096704245, -0.0094011575, -0.2210128754, 0.0925965533, -0.1953915805, 0.064847596, -0.2729...
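The `dumpstatus.json` URL mentioned above follows a simple pattern, so the availability of a given dump can be checked before launching a Beam job. The helper below is hypothetical (not part of the library) and stdlib-only:

```python
def dump_status_url(language: str, date: str) -> str:
    """Build the Wikimedia dump-status URL for a language code and a
    dump date in YYYYMMDD form, e.g. to probe it with urllib.request."""
    return f"https://dumps.wikimedia.org/{language}wiki/{date}/dumpstatus.json"

print(dump_status_url("id", "20210501"))
# → https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json
```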
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
I was trying to download dataset for `es` language, however I am getting the following error: ``` dataset = load_dataset('wikipedia', language='es', date="20210320", beam_runner='DirectRunner') ``` ``` Downloading and preparing dataset wikipedia/20210320.es (download: Unknown size, generated: Unknown size, post...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
481
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.176354453, -0.1677636653, -0.1718392819, 0.4323387146, 0.170759961, 0.283190757, 0.1634617746, 0.2103995532, 0.7203704119, -0.2296730727, 0.0827716663, 0.0324949622, 0.0824814141, -0.1710961759, 0.1047613844, -0.1762413383, 0.0409067608, -0.1311677098, 0.061952401, -0.3332504...
https://github.com/huggingface/datasets/issues/577
Some languages in wikipedia dataset are not loading
Hi ! This looks related to this issue: https://github.com/huggingface/datasets/issues/1994 Basically the parser that is used (mwparserfromhell) has some issues for some pages in `es`. We already reported some issues for `es` on their repo at https://github.com/earwig/mwparserfromhell/issues/247 but it looks like ther...
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
60
Some languages in wikipedia dataset are not loading Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am load...
[ 0.1454115212, -0.3527691364, -0.1282089949, 0.4522044361, 0.1266883314, 0.2579190433, 0.0820911154, 0.1208297089, 0.6645290256, -0.231767267, 0.0490372255, 0.0963198617, 0.1007603854, -0.1433963478, 0.1095915437, -0.1608641744, 0.0873806104, -0.0652696192, -0.0717891082, -0.294...
https://github.com/huggingface/datasets/issues/575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
Update: The imdb download completed after a long time (about 45 mins). Of course, once downloaded, loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. The urls for glue still don't work though.
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
34
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ...
[ -0.1126363352, -0.0148180583, -0.0514469631, 0.2499046177, 0.2150652558, -0.1080486402, -0.002304506, 0.1171094924, 0.1441772729, -0.1348837018, -0.4256685078, -0.0689159706, 0.1447223872, 0.1289688647, 0.274224788, -0.0121177016, -0.1593027413, -0.0744766966, -0.0916164294, 0....
https://github.com/huggingface/datasets/issues/575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
I am also seeing a similar error when running the following: ``` import nlp dataset = load_dataset('cola') ``` Error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py", line 509, in load_dataset m...
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
76
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ...
[ -0.1126363352, -0.0148180583, -0.0514469631, 0.2499046177, 0.2150652558, -0.1080486402, -0.002304506, 0.1171094924, 0.1441772729, -0.1348837018, -0.4256685078, -0.0689159706, 0.1447223872, 0.1289688647, 0.274224788, -0.0121177016, -0.1593027413, -0.0744766966, -0.0916164294, 0....
https://github.com/huggingface/datasets/issues/575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
@jeswan `"cola"` is not a valid dataset identifier (you can check the up-to-date list on https://huggingface.co/datasets) but you can find cola inside glue.
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
23
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ...
[ -0.1126363352, -0.0148180583, -0.0514469631, 0.2499046177, 0.2150652558, -0.1080486402, -0.002304506, 0.1171094924, 0.1441772729, -0.1348837018, -0.4256685078, -0.0689159706, 0.1447223872, 0.1289688647, 0.274224788, -0.0121177016, -0.1593027413, -0.0744766966, -0.0916164294, 0....
https://github.com/huggingface/datasets/issues/575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
Hi. Closing this one since #626 updated the glue urls. > 1. Why is it still blocking? Is it still downloading? After downloading it generates the arrow file by iterating through the examples. The number of examples processed per second is shown during the processing (not sure why it was not the case for you) >...
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
74
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ...
[ -0.1126363352, -0.0148180583, -0.0514469631, 0.2499046177, 0.2150652558, -0.1080486402, -0.002304506, 0.1171094924, 0.1441772729, -0.1348837018, -0.4256685078, -0.0689159706, 0.1447223872, 0.1289688647, 0.274224788, -0.0121177016, -0.1593027413, -0.0744766966, -0.0916164294, 0....
https://github.com/huggingface/datasets/issues/568
`metric.compute` throws `ArrowInvalid` error
Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again? If it was related to the distributed setup settings, it must be fixed. If it was related to empty metric inputs, it's going to be fixed in #654
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_st...
46
`metric.compute` throws `ArrowInvalid` error I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.comput...
[ -0.4075994492, -0.2417839617, 0.0564273074, 0.2862925529, 0.3149866462, -0.1689804196, -0.1591826826, 0.292385906, -0.1545241177, 0.4045128524, 0.052560553, 0.500080049, -0.1109338403, -0.3641472161, -0.0942386091, -0.1911462247, -0.132249862, 0.0535951033, 0.0623220541, -0.277...
https://github.com/huggingface/datasets/issues/568
`metric.compute` throws `ArrowInvalid` error
Closing this one as it was fixed in #654 Feel free to re-open if you have other questions
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_st...
18
`metric.compute` throws `ArrowInvalid` error I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.comput...
[ -0.4075994492, -0.2417839617, 0.0564273074, 0.2862925529, 0.3149866462, -0.1689804196, -0.1591826826, 0.292385906, -0.1545241177, 0.4045128524, 0.052560553, 0.500080049, -0.1109338403, -0.3641472161, -0.0942386091, -0.1911462247, -0.132249862, 0.0535951033, 0.0623220541, -0.277...
https://github.com/huggingface/datasets/issues/565
No module named 'nlp.logging'
Thanks for reporting. Apparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf Until then, I'd suggest you download the right bleurt folder from github ([th...
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l...
88
No module named 'nlp.logging' Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:4...
[ 0.0177445058, -0.3961700797, -0.0208087768, -0.251164645, 0.2170931399, -0.0493598804, 0.2021473348, 0.3259568512, 0.0938877612, -0.0865907967, 0.1572887301, 0.3769111335, -0.6784251928, 0.1394218802, 0.2663361728, -0.1221128181, -0.141555503, 0.1301272959, 0.4967287779, -0.084...
https://github.com/huggingface/datasets/issues/565
No module named 'nlp.logging'
Actually we can fix this on our side, this script didn't had to be updated. I'll do it in a few minutes
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l...
22
No module named 'nlp.logging' Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:4...
[ 0.0177445058, -0.3961700797, -0.0208087768, -0.251164645, 0.2170931399, -0.0493598804, 0.2021473348, 0.3259568512, 0.0938877612, -0.0865907967, 0.1572887301, 0.3769111335, -0.6784251928, 0.1394218802, 0.2663361728, -0.1221128181, -0.141555503, 0.1301272959, 0.4967287779, -0.084...
https://github.com/huggingface/datasets/issues/560
Using custom DownloadConfig results in an error
From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`. See:...
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
76
Using custom DownloadConfig results in an error ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` functi...
[ -0.1150192842, 0.1608140171, 0.1123034209, 0.0204853006, -0.0762841702, -0.059214212, 0.2878564596, 0.0386442505, 0.386790961, -0.1992646307, -0.1571435332, 0.3636113405, -0.2807903588, -0.0396445207, 0.13739039, 0.1345222443, -0.3357852399, 0.191671744, 0.2066785842, -0.099246...
https://github.com/huggingface/datasets/issues/560
Using custom DownloadConfig results in an error
Thanks for the report, I'll take a look. What is your specific use-case for providing a DownloadConfig object?
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
18
Using custom DownloadConfig results in an error ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` functi...
[ -0.1150192842, 0.1608140171, 0.1123034209, 0.0204853006, -0.0762841702, -0.059214212, 0.2878564596, 0.0386442505, 0.386790961, -0.1992646307, -0.1571435332, 0.3636113405, -0.2807903588, -0.0396445207, 0.13739039, 0.1345222443, -0.3357852399, 0.191671744, 0.2066785842, -0.099246...
https://github.com/huggingface/datasets/issues/560
Using custom DownloadConfig results in an error
Thanks. Our use case involves running a training job behind a corporate firewall with no access to any external resources (S3, GCP or other web resources). I was thinking about a 2-step process: 1) Download the resources / artifacts using some secure corporate channel, i.e. run `nlp.load_dataset()` without a specifi...
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
157
Using custom DownloadConfig results in an error ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` functi...
[ -0.1150192842, 0.1608140171, 0.1123034209, 0.0204853006, -0.0762841702, -0.059214212, 0.2878564596, 0.0386442505, 0.386790961, -0.1992646307, -0.1571435332, 0.3636113405, -0.2807903588, -0.0396445207, 0.13739039, 0.1345222443, -0.3357852399, 0.191671744, 0.2066785842, -0.099246...
https://github.com/huggingface/datasets/issues/560
Using custom DownloadConfig results in an error
I see. Probably the easiest way for you would be that we add simple serialization/deserialization methods to the Dataset and DatasetDict objects once the data files have been downloaded and all the dataset is processed. What do you think @lhoestq ?
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
41
Using custom DownloadConfig results in an error ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` functi...
[ -0.1150192842, 0.1608140171, 0.1123034209, 0.0204853006, -0.0762841702, -0.059214212, 0.2878564596, 0.0386442505, 0.386790961, -0.1992646307, -0.1571435332, 0.3636113405, -0.2807903588, -0.0396445207, 0.13739039, 0.1345222443, -0.3357852399, 0.191671744, 0.2066785842, -0.099246...
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
Indeed this is a known issue arising from the fact that we try to be compatible with cloudpickle. Does this also happen if you are installing in a virtual environment?
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
30
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ 0.0326846838, 0.2539581358, 0.149952814, 0.0345959552, 0.2147577852, -0.1813213974, -0.0204592198, -0.0045515588, 0.1024627462, -0.0508645214, 0.1380290091, 0.8880223632, -0.4422508776, 0.0815137252, 0.3362323046, 0.3320015967, -0.1796581745, -0.0002893051, -0.4403926134, -0.02...
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
> Indeed this is a known issue with the fact that we try to be compatible with cloudpickle. > > Does this also happen if you are installing in a virtual environment? Then it would work, because the package is in a writable path.
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
44
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ 0.0366532467, 0.2192302048, 0.1506812871, 0.050849773, 0.211217925, -0.182076782, -0.0289931335, -0.0183256734, 0.118990019, -0.0385014489, 0.1243853644, 0.8661038876, -0.4588843286, 0.1108861417, 0.3196983039, 0.3289012313, -0.1757941991, 0.0106613832, -0.396561265, -0.0296088...
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
> If it's fine for you then this is the recommended way to solve this issue. I don't want to use a virtual environment, because Nix is fully reproducible, and virtual environments are not. And I am the maintainer of `transformers` in nixpkgs, so sooner or later I will have to package `nlp`, since it is becoming ...
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
63
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ 0.0461160503, 0.2342472523, 0.1267919391, 0.029645564, 0.2665547132, -0.1070265099, -0.0495610908, 0.0400404632, 0.0180515759, -0.0675464198, 0.1628863961, 0.9074656963, -0.3907274306, -0.0176894516, 0.4088222682, 0.325348556, -0.1871856004, -0.0095229438, -0.5038885474, -0.010...
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
Ok interesting. We could have another check to see if it's possible to download and import the dataset scripts at a location other than the module path. I think this would probably involve tweaking the python system path dynamically. I don't know anything about Nix so if you want to give this a try yourself we can ...
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
141
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ -0.0510635562, 0.3298894465, 0.1181988269, 0.0430238992, 0.2496245801, -0.1069276556, -0.0243598763, 0.0844147205, 0.0435674377, -0.1043163761, 0.1716201156, 0.8944577575, -0.3751602769, 0.049655512, 0.4546214342, 0.3453479409, -0.202418372, -0.0113469111, -0.5702021718, -0.011...
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
@danieldk modules are now installed in a different location (by default in the cache directory of the lib, in `~/.cache/huggingface/modules`). You can also change that using the environment variable `HF_MODULES_PATH` Feel free to play with this change from the master branch for now, and let us know if it sounds good...
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
65
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ 0.0594275743, 0.2854036689, 0.1192257032, 0.0074426564, 0.2506990135, -0.1265824437, -0.0366810076, 0.029547764, 0.0179713983, -0.1192382798, 0.1451896876, 0.9340398908, -0.3485210836, -0.0550438426, 0.4481515586, 0.3217202723, -0.1729899049, -0.0191878676, -0.5424787402, -0.00...
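The environment variable mentioned above must be set before the library is imported. A minimal sketch; the path is hypothetical, and current `datasets` releases use `HF_MODULES_CACHE` instead, so check your installed version:

```python
import os

# Must be set before `import datasets` so the module cache location is
# picked up at import time. "/var/cache/hf_modules" is a hypothetical
# writable path chosen by the packager.
os.environ["HF_MODULES_PATH"] = "/var/cache/hf_modules"

# import datasets  # dataset/metric scripts now land under the path above
print(os.environ["HF_MODULES_PATH"])
# → /var/cache/hf_modules
```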
https://github.com/huggingface/datasets/issues/554
nlp downloads to its module path
> Feel free to play with this change from the master branch for now, and let us know if it sounds good for you :) > We plan to do a release in the next coming days Thanks for making this change! I just packaged the latest commit on master and it works like a charm now! :partying_face:
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
58
nlp downloads to its module path I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_d...
[ 0.0606999919, 0.2831607163, 0.1252469122, 0.0024891754, 0.2875499725, -0.1234021932, -0.0438720919, 0.049794361, 0.0141633824, -0.1077145413, 0.1856339276, 0.9319190383, -0.3687045276, -0.0051250854, 0.4456731081, 0.3321509063, -0.1585526913, -0.0046791146, -0.5357557535, -0.02...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset. Note that as soon as the conversion has been done once, the next time you'll load the dataset it will be much faster. ...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
88
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2580245435, -0.1429565847, -0.0743959472, 0.2015652657, -0.119244881, 0.068881616, 0.0702810213, 0.4482385516, 0.1473061293, -0.2636477053, 0.148645252, 0.2279232144, -0.1069060862, 0.0611825734, 0.1534855515, 0.074218303, -0.0200046897, 0.2010740042, -0.1128237247, -0.15188...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Humm, we can give a look at these large scale datasets indeed. Do you mind sharing a few stats on your dataset so I can try to test on a similar one? In particular some orders of magnitudes for the number of files, number of lines per files, line lengths.
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
50
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2486387044, -0.2108438611, -0.1305086613, 0.2715973854, -0.0312595665, 0.0532568321, 0.1728607118, 0.3125979006, 0.3433540761, -0.2542507052, 0.1040126681, 0.2262746692, -0.1480087638, 0.2491927892, 0.1742841899, 0.0440120623, -0.0694731101, 0.1338011324, -0.0940503851, -0.2...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
@lhoestq Yes, I understand that the first time requires more time. The concatenate_datasets seems to be a workaround, but I believe a multi-processing method should be integrated into load_dataset to make it easier and more efficient for users. @thomwolf Sure, here are the statistics: Number of lines: 4.2 Billion ...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
79
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2211725414, -0.0693994984, -0.0759154335, 0.1695171595, -0.0901313871, 0.100821197, 0.2206685692, 0.3593853414, 0.2234857082, -0.187432304, 0.1198084429, 0.2741488814, -0.047421854, 0.258961916, 0.1557989717, 0.103120029, -0.0261334665, 0.2411929667, 0.042060595, -0.09411446...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
@agemagician you can give a try at a multithreaded version if you want (currently on the #548). To test it, you just need to copy the new `text` processing script which is [here](https://github.com/huggingface/nlp/blob/07d92a82b7594498ff702f3cca55c074e2052257/datasets/text/text.py) somewhere on your drive and give i...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
76
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2351230681, -0.1589971334, -0.1092257351, 0.1558211297, -0.0522062071, 0.0935265794, 0.210172534, 0.3627996147, 0.2417386472, -0.1184582636, 0.093715474, 0.2659690976, -0.1745842993, 0.2608829141, 0.2182200253, 0.1295302808, -0.050927449, 0.1763875335, 0.03248135, -0.0707733...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
I have already generated the dataset, but now I tried to reload it and it is still very slow. I also have installed your commit and it is slow, even after the dataset was already generated. `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` It uses only a single thr...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
50
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1991715729, -0.1527013481, -0.0914235786, 0.185031116, -0.0580272824, 0.0069909063, 0.0542507544, 0.3648396432, 0.2267757505, -0.1864668876, 0.1637938023, 0.2714455426, -0.0985697359, 0.1658661813, 0.1985836923, 0.1851771623, -0.0191419739, 0.2842983603, 0.1013760567, -0.188...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
As mentioned in #548 , each time you call `load_dataset` with `data_files=`, they are hashed to get the cache directory name. Hashing can be too slow with 1TB of data. I feel like we should have a faster way of getting a hash that identifies the input data files
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
49
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1172167063, -0.036058031, -0.0959250182, 0.3137769103, 0.0166610032, 0.1080764085, 0.1881458759, 0.5626513958, 0.3573105335, -0.0834074467, 0.0841231793, 0.0788890868, -0.27516523, 0.1463638693, 0.3032571971, 0.1522520185, 0.0406834967, 0.2266364843, 0.0538094901, -0.1176898...
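The comment above notes that hashing 1 TB of file contents is too slow for cache-directory naming. One cheap alternative, shown purely as an illustration (this is NOT what the `nlp` library does), is to fingerprint file metadata instead of contents:

```python
import hashlib
import os
import tempfile

def quick_fingerprint(paths):
    # Illustrative alternative to hashing full file contents: combine each
    # file's path, size, and modification time. Metadata-only hashing is
    # O(number of files) rather than O(bytes on disk).
    h = hashlib.sha256()
    for p in sorted(paths):
        stat = os.stat(p)
        h.update(f"{p}:{stat.st_size}:{stat.st_mtime_ns}".encode())
    return h.hexdigest()

# Demo on two small temporary files.
tmpdir = tempfile.mkdtemp()
paths = []
for name in ("a.txt", "b.txt"):
    p = os.path.join(tmpdir, name)
    with open(p, "w") as f:
        f.write("example")
    paths.append(p)

fp = quick_fingerprint(paths)
```

The trade-off is that a metadata fingerprint misses in-place edits that preserve size and timestamp, which is why content hashing is the conservative default.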
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
I believe this is a very important feature; otherwise, loading will still be too slow even if the data cache generation is fast.
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
29
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.170883745, -0.1193053871, -0.1088861376, 0.2128689438, -0.0477886461, 0.0982000306, 0.1322443932, 0.4315572977, 0.3260003626, -0.1955103427, 0.1550403386, 0.2180577517, -0.1044814363, 0.1303239912, 0.2459830344, 0.0549582206, 0.0207398664, 0.2710025012, 0.0549228787, -0.1215...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Hmm ok then maybe it's the hashing step indeed. Let's see if we can improve this as well. (you will very likely have to regenerate your dataset if we change this part of the lib though since I expect modifications on this part of the lib to results in new hashes)
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
51
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1601841748, -0.1073321849, -0.1088038385, 0.2356133163, 0.0101084784, 0.0855073482, 0.0967390686, 0.4174630344, 0.2409226298, -0.2387266606, 0.178845793, 0.2212672234, -0.1573329568, 0.2012868375, 0.2440311462, 0.0954289809, -0.0814829394, 0.2566319406, -0.0297541749, -0.150...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Also, @agemagician you have to follow the step I indicate in my previous message [here](https://github.com/huggingface/nlp/issues/546#issuecomment-684648927) to use the new text loading script. Just doing `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did w...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
46
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1538135111, -0.2194946408, -0.0950481445, 0.1538351625, -0.0239239875, 0.0457140319, 0.1359890997, 0.4179363251, 0.3139535189, -0.1768574417, 0.135254249, 0.2417797893, -0.1323707402, 0.2570714355, 0.2852428854, 0.0912736505, -0.0661660284, 0.2590824068, 0.0090139695, -0.117...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
No problem, I will regenerate it. That will let us see whether both issues are solved and whether the data generation step as well as the hashing step are now fast.
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
31
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1519209892, -0.1488329917, -0.1157124862, 0.2396363914, 0.0376228802, 0.0494574755, 0.1378421634, 0.4157026112, 0.2257431746, -0.2202196121, 0.1811742634, 0.2084629238, -0.1486447155, 0.2225389332, 0.2400525063, 0.1334553808, -0.0106433444, 0.2384752482, -0.0394379757, -0.18...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Ok so now the text files won't be hashed. I also updated #548 to include this change. Let us know if it helps @agemagician :)
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
25
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.1468622535, -0.1677477509, -0.1159890369, 0.250762552, 0.0116757657, 0.1286736131, 0.2028606683, 0.4360564947, 0.2853425145, -0.2157855779, 0.1411367953, 0.1851584166, -0.1777909696, 0.1772631854, 0.2101213485, 0.1729488224, -0.0373953171, 0.2307506651, 0.0664633363, -0.1615...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Right now, caching 18 GB of data takes 1 hour 10 minutes. Is that the expected time? @lhoestq @agemagician At this rate (assuming large files are cached at the same rate), caching the full mC4 (27 TB) would take about a month (~26 days).
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
41
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2007326782, -0.1194672585, -0.096519433, 0.210425958, -0.0640925542, 0.0504139774, 0.0429918803, 0.3229283094, 0.2590810657, -0.3136386573, 0.1074275821, 0.122472778, -0.1483515501, 0.1026051193, 0.153373912, 0.1431350857, 0.0386224836, 0.2054866552, 0.0906055942, -0.1987738...
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Hi ! Currently it is that slow because we haven't implemented parallelism for the dataset generation yet. Though we will definitely work on this :) For now I'd recommend loading the dataset shard by shard in parallel, and then concatenate them: ```python # in one process, load first 100 files for english shard1 ...
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
75
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2651072145, -0.1227213144, -0.0954299495, 0.1771879941, -0.097576499, 0.0928757787, 0.2035612017, 0.3488459885, 0.1298840344, -0.1564109772, 0.0786781907, 0.1867337078, -0.0765857846, 0.2162315845, 0.1551424116, 0.0474083535, -0.002900609, 0.1883881539, 0.0590902641, -0.1620...
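The workaround in the comment above — load the corpus shard by shard in parallel, then concatenate — can be sketched as follows. To keep the sketch runnable without the `nlp` library, `load_shard` is a stand-in for `nlp.load_dataset("text", data_files=file_group)`, and the file names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def load_shard(file_group):
    # Stand-in for nlp.load_dataset("text", data_files=file_group);
    # returning the group itself keeps this sketch self-contained.
    return list(file_group)

# Hypothetical corpus of 350 text files, split into shards of 100 files.
files = [f"part-{i:05d}.txt" for i in range(350)]
shards = [files[i:i + 100] for i in range(0, len(files), 100)]

# Load the shards in parallel. The comment above suggests one process per
# shard; threads are used here only so the example runs anywhere as-is.
with ThreadPoolExecutor(max_workers=4) as pool:
    loaded = list(pool.map(load_shard, shards))

# With the real library, the per-shard datasets would be merged with
# nlp.concatenate_datasets(loaded); here we simply flatten the lists.
merged = [f for shard in loaded for f in shard]
```

Each shard's Arrow conversion is cached independently, so a crash mid-run only loses the shard in flight, not the whole corpus.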
https://github.com/huggingface/datasets/issues/546
Very slow data loading on large dataset
Sorry to write on a closed issue but, has there been any progress on parallelizing the `load_dataset` function?
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
18
Very slow data loading on large dataset I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread durin...
[ -0.2692455053, -0.1433455348, -0.1251460463, 0.2536851764, -0.1512008607, 0.1418294311, 0.1543622315, 0.4112132788, 0.2585251033, -0.2052987963, 0.1103186756, 0.3024187386, -0.0668151602, 0.1175145805, 0.1575470567, 0.0696561038, -0.0306314845, 0.1984196007, 0.1373993456, -0.11...
https://github.com/huggingface/datasets/issues/539
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
Hi @gaguilar If you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) and runni...
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appea...
68
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update t...
[ -0.0616524667, 0.4437026083, -0.0581614822, 0.0664169863, -0.3072106838, 0.1153275073, -0.2732501626, 0.5395029187, -0.0016356849, -0.1196582317, 0.0857632756, 0.2251151651, 0.0789774805, -0.1222944558, 0.0061916867, 0.1934712231, 0.0321504772, 0.0933848843, 0.0987023488, 0.039...
https://github.com/huggingface/datasets/issues/539
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
Hi @thomwolf Thanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550).
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appea...
17
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update t...
[ -0.0616524667, 0.4437026083, -0.0581614822, 0.0664169863, -0.3072106838, 0.1153275073, -0.2732501626, 0.5395029187, -0.0016356849, -0.1196582317, 0.0857632756, 0.2251151651, 0.0789774805, -0.1222944558, 0.0061916867, 0.1934712231, 0.0321504772, 0.0933848843, 0.0987023488, 0.039...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one. Either the file you downloaded was corrupted along the way, or the host updated the file. Could you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an update i...
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
68
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
I just cleared the cache an run it again. The error persists ): ``` nlp (master) $ rm -rf /Users/abarbosa/.cache/huggingface/ nlp (master) $ python Python 3.8.5 (default, Aug 5 2020, 03:39:04) [Clang 10.0.0 ] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. ...
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
147
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
I'm dealing with the same issue; please update the checksum on the nlp library's end. The data seems to have changed on their end.
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
22
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
We have a discussion on this datasets here: https://github.com/huggingface/nlp/pull/540 Feel free to participate if you have some opinion on the scope of data which should be included in this dataset.
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
30
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia a...
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
61
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
> At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia...
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
67
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/537
[Dataset] RACE dataset Checksums error
> > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikiped...
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
108
[Dataset] RACE dataset Checksums error Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ...
[ -0.3484401405, 0.3749223948, -0.0223543998, 0.2424075752, 0.2333744466, 0.0591793843, 0.2440312356, 0.467477411, 0.2553078532, -0.1403689235, -0.0667248443, -0.0904461369, -0.3057557344, 0.1228692532, 0.0535941124, -0.0088579645, -0.0977110565, 0.0319081843, -0.1895278394, 0.05...
https://github.com/huggingface/datasets/issues/534
`list_datasets()` is broken.
Thanks for reporting ! This has been fixed in #475 and the fix will be available in the next release
version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/py...
20
`list_datasets()` is broken. version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/....
[ -0.292550683, 0.1287723631, -0.1103175357, 0.2530121505, 0.1547409296, 0.0697495937, 0.3828647435, 0.4221463203, -0.0766395479, -0.0494418666, -0.1751875281, 0.4431256652, -0.2499772906, -0.0311815515, -0.0670694932, -0.4052232802, -0.0786971152, 0.3645732403, 0.0530653261, 0.1...