Dataset columns:

html_url        string (length 48 to 51)
title           string (length 5 to 268)
comments        string (length 63 to 51.8k)
body            string (length 0 to 36.2k)
comment_length  int64 (16 to 1.52k)
text            string (length 164 to 54.1k)
embeddings      list
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
This seems to be on the `transformers` library side. If you have more information (pip env) or, even better, a colab reproducing the error, we can investigate.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
27
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
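A minimal, stdlib-only sketch of the lazy-reading idea behind loading large text files without OOM (the `iter_lines` helper is hypothetical, not part of `datasets`): stream the file line by line instead of reading it into memory at once.

```python
def iter_lines(path: str):
    """Yield lines from a large text file without reading it all into RAM."""
    with open(path, encoding="utf-8") as f:
        for line in f:  # file objects iterate lazily, one line at a time
            yield line.rstrip("\n")
```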
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
It seems like it's solved with fresh versions of transformers. I tried to replicate the error with a fresh pip install of transformers & datasets on colab, and the error no longer occurs. On colab it stays stable at 5GB! (Y) Edit: **Thanks for your great work**. Have a good day.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
50
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@gaceladri which versions of transformers and datasets are you using now? I want to try again. Thanks.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
16
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
It's happening to me again. After 4 hours of pre-training, my RAM fills up and the kernel dies. I am using the latest transformers version as of today, 4.4.0, and the latest version of datasets, 1.2.1, both installed from master. The memory consumption keeps increasing.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
45
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for the investigation @gaceladri Apparently this happens when `num_workers>0` and has to do with objects being copied-on-write. Did you try setting num_workers to 0 @gaceladri ? If the issue doesn't happen with `num_workers=0` then this would confirm that it's indeed related to this python/pytorch issue. ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
114
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
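A stdlib-only sketch of the memory check suggested here (the `max_rss_mb` helper is hypothetical; it assumes Linux, where `ru_maxrss` is in kilobytes): sample peak RSS before and after an epoch run with `num_workers=0` to see whether memory still grows without forked workers.

```python
import resource

def max_rss_mb() -> float:
    # Peak resident set size of this process; KB on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

before = max_rss_mb()
# ... run one training epoch with num_workers=0 here ...
after = max_rss_mb()
print(f"peak RSS grew by {after - before:.1f} MB")
```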
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Hmmm so this might come from another issue... Since it doesn't seem to be related to multiprocessing it should be easier to investigate though. Do you have some ideas @gaceladri ?
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
31
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq I took a quick look at a previously spotted bug in my env, wandb/sdk/interface/interface.py, because sometimes when I load the dataset I get a multiprocessing error at line 510 in wandb...interface.py. This bug is reported here: https://github.com/huggingface/datasets/issues/847 ``` --------------------------...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
396
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq But despite this, I got lost in the [class Dataset()](https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset) reading the pyarrow files. Edit: but you should be right that it does not have to be related to multiprocessing, since it keeps happening when `num_workers=0`
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
37
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Or maybe wandb uses multiprocessing ? One process for wandb logging and one for actual training ? If this is the case then even setting `num_workers=0` would cause the process to be forked for wandb and therefore cause the memory issue.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
41
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq could be, but if we set wandb to false this should not happen. I am going to try.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
19
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq It keeps happening. I have uninstalled wandb from my env, set `%env WANDB_DISABLED=true` in my notebook, and commented out this func: ``` def get_available_reporting_integrations(): integrations = [] if is_azureml_available(): integrations.append("azure_ml") if is_comet_available(): ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
65
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
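A sketch of the workaround being tested here: setting the env var before transformers is imported is what `%env WANDB_DISABLED=true` does in a notebook. The `report_to="none"` alternative shown in the comment is the newer transformers API and assumes transformers is installed.

```python
import os

# Must be set before transformers/wandb are imported for it to take effect.
os.environ["WANDB_DISABLED"] = "true"

# Newer transformers versions also accept an explicit reporting argument:
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="out", report_to="none")
```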
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for checking @gaceladri . Let's investigate the single process setting then. If you have some sort of colab notebook with a minimal code example that shows this behavior feel free to share it @gaceladri so that we can play around with it to find what causes this. Otherwise I'll probably try to reproduce on my s...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
60
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq sure. Here you have https://colab.research.google.com/drive/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing let me know if the link works and reproduces the issue. For me it does: if you start training, the RAM usage keeps increasing. Let me know. Thanks!
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
39
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Could the bug be coming from tokenizers? I got this warning in the terminal from my jupyter notebook: ``` huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `to...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
63
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
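The warning quoted above names its own workaround; a minimal sketch (the env var is the one the tokenizers library itself mentions in that message):

```python
import os

# Disable the Rust tokenizer's internal thread pool so that forking
# (e.g. by DataLoader workers) cannot deadlock; silences the warning above.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```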
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
I've never experienced memory issues with tokenizers, so I don't know. Cc @n1t0: are you aware of any issue that would cause memory to keep increasing when the tokenizer is used in the Data Collator for language modeling?
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
39
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq Thanks for pointing to n1t0. Just to clarify: that warning appeared while fine-tuning, without a collator: ``` from datasets import load_dataset, load_metric import numpy as np GLUE_TASKS = [ "cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
468
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for sharing your results. So you still had the issue while fine-tuning? And the issue still appears with a bare-bones dataset from an arrow file...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
27
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Yes, in both cases: fine-tuning a pre-trained model and pre-training from scratch with a local arrow file already pre-processed.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
19
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
[ -0.6339287758, -0.4775367975, 0.0106936339, 0.2986355424, 0.3600475788, -0.1518250853, 0.5567325354, 0.3738059103, 0.0108824177, 0.0107197072, -0.1295929253, -0.18279998, -0.2669847012, -0.1620898843, -0.0323720984, 0.0232064184, -0.0998212174, 0.1968754083, -0.2772670686, -0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Basically ~600MB txt files (UTF-8) × 59, with contents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\n``` (Korean sample text, roughly "Hello, this is example text just for illustration."). Also, it gets stuck for a very long time at ```Testing the mapped function outputs```, for more than 12 hours (currently ongoing)
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
36
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
It gets stuck while doing `.map()` ? Are you using multiprocessing ? If you could provide a code snippet it could be very useful
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
24
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
From transformers/examples/language-modeling/run-language-modeling.py : ``` def get_dataset( args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate: bool = False, cache_dir: Optional[str] = None, ): file_path = args.eval_data_file if evaluate else args.train_data_file if ...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
71
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
I am not able to reproduce on my side :/ Could you send the versions of `datasets` and `pyarrow` you're using? Could you try to update the lib and try again? Or do you think you could try to reproduce it on google colab?
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
47
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Huh, weird. It's fixed on my side too. But now ```Caching processed dataset``` is taking forever - how can I disable it? Any flags?
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
24
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Right after `Caching processed dataset`, your function is applied to the dataset and there's a progress bar that shows how much time is left. How much time does it take for you ? Also caching isn't supposed to slow down your processing. But if you still want to disable it you can do `.map(..., load_from_cache_file=F...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
55
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Ah, it's much faster now (takes around 15~20 min). BTW, is there any way to set the default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with TPUStrategy :(
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
29
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
> Ah, it’s much faster now(Takes around 15~20min). Glad to see that it's faster now. What did you change exactly ? > BTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :( Oh I didn't know about that. Feel free to open an is...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
92
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
>>> Glad to see that it's faster now. What did you change exactly ? I don't know, it just worked...? Sorry I couldn't be more helpful. Setting with numpy array is a great idea! Thanks.
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
35
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
[ -0.4925668538, -0.2310243845, -0.119864285, 0.2836015522, 0.4663561881, -0.0735015199, 0.3053680658, 0.5961021781, -0.1138257012, 0.0461648479, -0.0624196492, -0.030405255, -0.1033420116, 0.3101792932, -0.106213443, -0.0389503799, -0.2278844565, 0.1130641326, -0.0946917832, 0.0...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference.. And then for y...
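The dtype loss described above can be reproduced in plain numpy (a minimal illustration, not the library's internals): converting an array to a Python list discards the dtype, and rebuilding the array silently defaults to float64 unless the dtype is passed back explicitly.

```python
import numpy as np

arr = np.ones(3, dtype=np.float32)

# Round-tripping through a Python list loses the dtype information...
back = np.array(arr.tolist())
# ...so the array comes back as float64 unless the dtype is re-applied.
restored = np.array(arr.tolist(), dtype=np.float32)
```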
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
156
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part. Thanks for your suggestion. as I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up...
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
89
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
I'm glad you managed to figure something out :) Casting from arrow to numpy can be 100x faster than casting from arrow to list. This is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow. On the other hand to create python lists it is slow since it has ...
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
70
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
I encountered a similar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome. I tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive. I just want to sh...
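A pattern in the spirit of the workaround above (a hedged sketch; the function name is hypothetical and this is not tied to the `datasets` API): run every batch through a casting function, e.g. inside a transform or a collate function, so any float64 column comes out with the narrower dtype while integer columns are left alone.

```python
import numpy as np

def cast_float_columns(batch, dtype=np.float32):
    """Cast every float64 column of a batch dict to the requested float dtype."""
    out = {}
    for name, values in batch.items():
        arr = np.asarray(values)
        out[name] = arr.astype(dtype) if arr.dtype == np.float64 else arr
    return out

batch = cast_float_columns({"x": [[0.5, 1.5]], "ids": [[1, 2]]})
```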
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
96
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
Reopening since @bhavitvyamalik started looking into it ! Also I'm posting here a function that could be helpful to support preserving the dtype of tensors. It's used to build a pyarrow array out of a numpy array and: - it doesn't convert the numpy array to a python list - it keeps the precision of the numpy ar...
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
206
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
@lhoestq Have you thought about this further? We have a use case where we're attempting to load data containing numpy arrays using the `datasets` library. When using one of the "standard" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This sl...
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
239
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/625
dtype of tensors should be preserved
Hi ! It would be awesome to achieve this speed for numpy arrays ! For now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D). Maybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR...
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
185
dtype of tensors should be preserved After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-...
[ -0.1134336665, -0.221115008, -0.0097108139, 0.2073050439, 0.5532283783, 0.1730132401, 0.5313699245, 0.1225807741, 0.1504824311, -0.0665397719, -0.0843991637, 0.2457148433, -0.117551893, -0.1751451641, 0.1026155949, -0.2025275528, 0.2282531559, -0.0652568489, -0.1441735029, -0.2...
https://github.com/huggingface/datasets/issues/623
Custom feature types in `load_dataset` from CSV
Currently `csv` doesn't support the `features` attribute (unlike `json`). What you can do for now is cast the features using the in-place transform `cast_` ```python from datasets import load_dataset dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label']) dataset.cast...
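Independent of the library, the same idea can be sketched with the stdlib `csv` module (column names and types are hypothetical; this only illustrates applying declared feature types to parsed rows, since a CSV reader otherwise yields everything as strings):

```python
import csv
import io

FEATURES = {"text": str, "label": int}  # declared feature types (hypothetical)

def read_typed(csv_text, delimiter=";"):
    """Parse CSV rows and cast each column with its declared type."""
    reader = csv.DictReader(io.StringIO(csv_text),
                            fieldnames=["text", "label"], delimiter=delimiter)
    return [{col: FEATURES[col](val) for col, val in row.items()} for row in reader]

rows = read_typed("i feel great;3\ni feel sad;0")
```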
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
38
Custom feature types in `load_dataset` from CSV I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotio...
[ 0.0802028924, -0.2782892585, -0.0531790853, 0.3509229124, 0.3172289729, -0.1943105757, 0.5701336265, 0.1113842726, 0.446125567, 0.0253306851, 0.0947441235, 0.3161779642, -0.0919102132, 0.3901152909, -0.0581882037, 0.0267203413, -0.1612748355, 0.3348048329, -0.0091214385, -0.349...
https://github.com/huggingface/datasets/issues/623
Custom feature types in `load_dataset` from CSV
Hi @lhoestq we've tried out your suggestion but are now running into the following error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-163-81ffd5ac18c9> in <module> ----> 1 dataset.cast_(...
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
168
Custom feature types in `load_dataset` from CSV I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotio...
[ 0.0802028924, -0.2782892585, -0.0531790853, 0.3509229124, 0.3172289729, -0.1943105757, 0.5701336265, 0.1113842726, 0.446125567, 0.0253306851, 0.0947441235, 0.3161779642, -0.0919102132, 0.3901152909, -0.0581882037, 0.0267203413, -0.1612748355, 0.3348048329, -0.0091214385, -0.349...
https://github.com/huggingface/datasets/issues/623
Custom feature types in `load_dataset` from CSV
In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq? Should I add it?
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
26
Custom feature types in `load_dataset` from CSV I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotio...
[ 0.0802028924, -0.2782892585, -0.0531790853, 0.3509229124, 0.3172289729, -0.1943105757, 0.5701336265, 0.1113842726, 0.446125567, 0.0253306851, 0.0947441235, 0.3161779642, -0.0919102132, 0.3901152909, -0.0581882037, 0.0267203413, -0.1612748355, 0.3348048329, -0.0091214385, -0.349...
https://github.com/huggingface/datasets/issues/623
Custom feature types in `load_dataset` from CSV
> In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq? > > Should I add it? Sure let's add it. Setting the convert options should do the job > Hi @lhoestq we've tried out your suggestion but are now running into the following error: > > ``` ...
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
136
Custom feature types in `load_dataset` from CSV I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotio...
[ 0.0802028924, -0.2782892585, -0.0531790853, 0.3509229124, 0.3172289729, -0.1943105757, 0.5701336265, 0.1113842726, 0.446125567, 0.0253306851, 0.0947441235, 0.3161779642, -0.0919102132, 0.3901152909, -0.0581882037, 0.0267203413, -0.1612748355, 0.3348048329, -0.0091214385, -0.349...
https://github.com/huggingface/datasets/issues/623
Custom feature types in `load_dataset` from CSV
PR is open for the `ValueError: Target schema's field names are not matching the table's field names` error. I'm adding the features parameter to csv
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
25
Custom feature types in `load_dataset` from CSV I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotio...
[ 0.0802028924, -0.2782892585, -0.0531790853, 0.3509229124, 0.3172289729, -0.1943105757, 0.5701336265, 0.1113842726, 0.446125567, 0.0253306851, 0.0947441235, 0.3161779642, -0.0919102132, 0.3901152909, -0.0581882037, 0.0267203413, -0.1612748355, 0.3348048329, -0.0091214385, -0.349...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5. Linux (Ubuntu 18.04) - Python 3.8 ====================== Package - Version --------------------- certifi 2020.6.20 chardet 3.0.4 click 7.1.2 datasets 1.0.1 di...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
194
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
Downgrading to 3.7 does not help. Here is a dummy text file: ```text Verzekering weigert vaker te betalen Bedrijven van verzekeringen erkennen steeds minder arbeidsongevallen . In 2012 weigerden de bedrijven te betalen voor 21.055 ongevallen op het werk . Dat is 11,8 % van alle ongevallen op het werk . Nog nooi...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
120
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
@banunitte Please do not post screenshots in the future but copy-paste your code and the errors. That allows others to copy-and-paste your code and test it. You may also want to provide the Python version that you are using.
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
39
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
I have the same problem on Linux with the script crashing with a CSV error. This may be caused by CRLF line endings; after converting CRLF to LF, the problem was solved.
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
29
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
I pushed a fix for `pyarrow.lib.ArrowInvalid: CSV parse error`. Let me know if you still have this issue. Not sure about the windows one yet
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
25
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default): ``...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
107
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
![image](https://user-images.githubusercontent.com/36957508/93300760-fa9a8680-f829-11ea-9105-7a6f67ad8373.png) win10, py3.6 ``` from datasets import Features, Value, ClassLabel, load_dataset features = Features({'text': Value('string'), 'ctext': Value('string')}) file_dict = {'train': PATH/'summary.csv'} ...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
31
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
```python Traceback (most recent call last): File "main.py", line 281, in <module> main() File "main.py", line 190, in main train_data, test_data = data_factory( File "main.py", line 129, in data_factory train_data = load_dataset('text', File "/home/me/Downloads/datasets/src/datasets/load....
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
135
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
> ![image](https://user-images.githubusercontent.com/36957508/93300760-fa9a8680-f829-11ea-9105-7a6f67ad8373.png) > win10, py3.6 > > ``` > from datasets import Features, Value, ClassLabel, load_dataset > > > features = Features({'text': Value('string'), 'ctext': Value('string')}) > file_dict = {'train': PATH/...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
184
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
> To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default): ...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
206
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
Hi @raruidol To fix the RAM issue you'll need to shard your text files into smaller files (see https://github.com/huggingface/datasets/issues/610#issuecomment-691672919 for example) I'm not sure why you're having the csv error on linux. Do you think you could to to reproduce it on google colab for example ? Or s...
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
59
load_dataset for text files not working Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loa...
[ -0.2746653557, -0.4020572305, 0.0175604671, 0.3872526288, 0.2696422935, -0.0386611708, 0.3188883066, -0.0543564558, 0.4263593853, -0.0580489412, 0.0659723133, 0.1455249637, -0.155762881, 0.2742005587, 0.0635563657, -0.0350760669, 0.1571834832, -0.0138411364, -0.2914434075, 0.04...
https://github.com/huggingface/datasets/issues/622
load_dataset for text files not working
@lhoestq The crash message shows up when loading the dataset: ``` print('Loading corpus...') files = glob.glob('corpora/shards/*') -> dataset = load_dataset('text', script_version='master', data_files=files) print('Corpus loaded.') ``` And this is the exact message: ``` Traceback (most recent call last)...
I tested on google colab which is also linux using this code:

- first download an arbitrary text file

```bash
wget https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt
```

- then run

```python
from datasets import load_dataset
d = load_dataset("text", data_files="all_train.t...
```
Update: also tested the above code in a docker container from [jupyter/minimal-notebook](https://hub.docker.com/r/jupyter/minimal-notebook/) (based on ubuntu) and still not able to reproduce
It looks like with your text input file it works without any problem. I have been doing some experiments this morning with my input files and I'm almost certain that the crash is caused by some unexpected pattern in the files. However, I've not been able to spot the main cause of it. What I find strange is that this same ...
Under the hood it does

```python
import pyarrow as pa
import pyarrow.csv

# Use csv reader from Pyarrow with one column for text files

# To force the one-column setting, we set an arbitrary character
# that is not in text files as delimiter, such as \b or \v.
# The bell character, \b, was used to make beeps b...
```
Could you try with `\a` instead of `\b`? It looks like the bell character is `\a` in Python, not `\b`.
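Indeed, in Python string literals the two escapes map to different ASCII control codes:

```python
# \a is the bell/alert character (BEL, ASCII 7),
# \b is the backspace character (BS, ASCII 8).
print(ord("\a"), ord("\b"))  # 7 8
```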
I was just exploring if the crash was happening in every shard or not, and which shards were generating the error message. With \b I got the following list of shards crashing:

```
Errors on files: ['corpora/shards/shard_0069', 'corpora/shards/shard_0043', 'corpora/shards/shard_0014', 'corpora/shards/shard_0032', '...
```
Hmmm I was expecting it to work with \a, not sure why they appear in your text files though
Hi @lhoestq, is there any input length restriction that was not there before the update of the nlp library?
No we never set any input length restriction on our side (maybe arrow but I don't think so)
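To rule out a length problem on the data side, one can simply measure the longest line in a shard (a small hypothetical helper, not part of `datasets`):

```python
def longest_line(path):
    """Return (line_number, length_in_chars) of the longest line in a text file."""
    best_no, best_len = 0, 0
    with open(path, encoding="utf-8") as f:
        for n, line in enumerate(f, start=1):
            if len(line) > best_len:
                best_no, best_len = n, len(line)
    return best_no, best_len
```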
@lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV), rules are set for what is allowed and what isn't, so that it actually constitutes a CSV file. In a text file you basically have "anything goes", so I don't think you can ever be entirely sure tha...
Okay, I have split the crashing shards into individual sentences, and some examples of the inputs that are causing the crashes are the following ones:

_4. DE L’ORGANITZACIÓ ESTAMENTAL A L’ORGANITZACIÓ EN CLASSES A mesura que es desenvolupava un sistema econòmic capitalista i naixia una classe burgesa cada vegada...
So we're using the csv reader to read text files because arrow doesn't have a text reader. To work around the fact that text files are just csv with one column, we want to set a delimiter that doesn't appear in text files. Until now I thought that it would do the job but unfortunately it looks like even characters lik...
> Okay, I have splitted the crashing shards into individual sentences and some examples of the inputs that are causing the crashes are the following ones

Thanks for digging into it !

Characters like \a or \b are not shown when printing the text, so as it is I can't tell if it contains unexpected characters. Mayb...
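One way to check from Python whether the shards actually contain invisible characters like `\a` or `\b` (the helper name is hypothetical):

```python
def lines_with_control_chars(path, chars=("\a", "\b", "\v")):
    """Return (line_number, chars_found) for every line containing one of `chars`."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for n, line in enumerate(f, start=1):
            found = [c for c in chars if c in line]
            if found:
                hits.append((n, found))
    return hits
```

Running this over the crashing shards would show exactly which lines hold the delimiter character.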
That's true, it was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have "\b" at the end?
> That's true, It was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have "\b" at the end?

I don't think it would work since we only want one column, and "\b" is set to be the delimiter between two columns, so it will raise the same issue again. Pyarrow would think th...
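The stdlib `csv` module shows the same column-count behaviour (this is a sketch of the failure mode, not the actual pyarrow code path):

```python
import csv
import io

delim = "\b"  # the delimiter used for one-column "text as csv" parsing
clean = "some text\n"
trailing = "some text" + delim + "\n"  # a sample with "\b" forced at the end

rows = list(csv.reader(io.StringIO(clean + trailing), delimiter=delim))
# The trailing "\b" makes the parser see a second, empty field:
print([len(r) for r in rows])  # [1, 2]
```

So any line containing the delimiter, even only at the end, breaks the one-column assumption.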
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
It seems that I ran into the same problem

```python
def tokenize(cols, example):
    for in_col, out_col in cols.items():
        example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))
    return example

cola = datasets.load_dataset('glue', 'cola')
tokenized_cola = cola.map(partial(token...
```
After upgrading to 1.0 I started seeing errors in my data loading script after enabling multiprocessing.

```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si...
```
same problem.

```python
encoded_dataset = core_data.map(lambda examples: tokenizer(examples["query"], examples["document"], padding=True, truncation='longest_first', return_tensors="pt", max_length=384), num_proc=16, keep_in_memory=True)
```

it outputs:

```
Set __getitem__(key) output type to python objects for ['document', 'i...
```
Thanks for reporting.

Which tokenizers are you using? What platform are you on? Can you tell me which version of datasets and pyarrow you're using? @timothyjlaurent @richarddwang @HuangLianzhe

Also if you're able to reproduce the issue on google colab that would be very helpful.

I tried to run your code ...
Hi, sorry that I forgot to check what my version was. But after updating datasets to master (editable install) and the latest pyarrow, it works now ~
Sorry, I just noticed this. I'm running this on MACOS. The version of datasets I was using was 1.0.0, but I've also tried it on 1.0.2. `pyarrow==1.0.1`, Python 3.6

Consider this code:

```python
loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
ds = load_dataset(
    loader_path, name=...
```
#659 should fix the `KeyError` issue. It was due to the formatting not getting updated the right way
Also maybe @n1t0 knows why setting `TOKENIZERS_PARALLELISM=true` creates deadlock issues when calling `map` with multiprocessing ?
@lhoestq Thanks for taking a look. I pulled the master but I still see the key error.

```
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
#0: 100%|█████████████████...
```
The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf https://github.com/huggingface/tokenizers/issue...
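So a workaround on the user side is to silence the warning and disable tokenizer parallelism explicitly before the forked processes start (the env var name comes from the warning quoted above):

```python
import os

# Must be set before the tokenizer is used and before
# any multiprocessing (e.g. .map(num_proc=...)) forks workers.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```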
> Thanks for taking a look. I pulled the master but I still see the key error.

I am no longer able to get the error since #659 was merged. Not sure why you still have it @timothyjlaurent

Maybe it is a cache issue? Could you try to use `load_from_cache_file=False` in your `.map()` calls?
> The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf [huggingface/tokenizers#187](https://github.c...
Hmmm I pulled the latest commit, `b93c5517f70a480533a44e0c42638392fd53d90`, and I'm still seeing both the hanging and the key error.
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
18
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.3769077659, -0.1904279292, -0.0374411158, 0.1203724071, 0.0553162545, -0.1488190591, 0.2511083186, 0.3369264007, 0.1473492533, 0.1732963324, 0.1061455905, 0.4089753628, -0.402579844, 0.3307686746, -0.3633013964, 0.0771412477, 0.113956362, -0.0854944885, -0.2521776855, 0.2545...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
Hi @timothyjlaurent The hanging fix just got merged, that why you still had it. For the key error it's possible that the code you ran reused cached datasets from where the KeyError bug was still there. Could you try to clear your cache or make sure that it doesn't reuse cached data with `.map(..., load_from_cac...
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
63
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.3729740381, -0.1859523654, -0.0339066386, 0.129206717, 0.09803541, -0.1325107813, 0.2636091709, 0.4240574241, 0.1391320527, 0.1339373142, 0.04652806, 0.4665991664, -0.369638443, 0.446620822, -0.3270714283, -0.023708228, 0.0842829421, -0.0867141634, -0.2671530545, 0.253703057...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
Hi @lhoestq , Thanks for letting me know about the update. So I don't think it's the caching - because hashing mechanism isn't stable for me -- but that's a different issue. In any case I `rm -rf ~/.cache/huggingface` to make a clean slate. I synced with master and I see the key error has gone away, I tried w...
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
174
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.3455712199, -0.1244765222, -0.0099463901, 0.2436584234, 0.1326491386, -0.1085879207, 0.2091186047, 0.3842709661, 0.1275973916, 0.055418361, 0.0650869235, 0.2885190248, -0.4160086215, 0.3402268887, -0.2788424492, 0.0649460182, 0.1176876053, -0.1377788037, -0.373721987, 0.2305...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
Thanks for reporting. I'm going to fix that and add a test case so that it doesn't happen again :) I'll let you know when it's done In the meantime if you could make a google colab that reproduces the issue it would be helpful ! @timothyjlaurent
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
47
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.4121120572, -0.1406525373, -0.0332483612, 0.2547088265, 0.1527834386, -0.1575673968, 0.3005400002, 0.2842245102, 0.1220513135, 0.1786371619, 0.1057424992, 0.4333915114, -0.4509957433, 0.4070866406, -0.3537719846, 0.0146764461, 0.1161234304, -0.0957585275, -0.3474510312, 0.23...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
Thanks @timothyjlaurent ! I just merged a fix on master. I also checked your notebook and it looks like it's working now. I added some tests to make sure it works as expected now :)
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
35
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.4193555117, -0.1071407571, -0.0392603241, 0.2147691548, 0.1458775848, -0.1598081589, 0.260438472, 0.3937072754, 0.098954387, 0.1513859779, 0.0195597913, 0.4504708946, -0.363740772, 0.4398679733, -0.319118917, 0.034580674, 0.0604736209, -0.0270153228, -0.390312016, 0.20883171...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
Great, @lhoestq . I'm trying to verify in the colab: changed ``` !pip install datasets ``` to ``` !pip install git+https://github.com/huggingface/datasets@master ``` But I'm still seeing the error - I wonder why?
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
32
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.4416672289, -0.1638823748, -0.0333688259, 0.2726550698, 0.1398596764, -0.1488541067, 0.2888736427, 0.2428044379, 0.1233454496, 0.2103492171, 0.0198596995, 0.5049681067, -0.3793999851, 0.3664862812, -0.3114084005, 0.042224139, 0.0823741183, -0.0164949279, -0.3751583993, 0.202...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
It works on my side @timothyjlaurent on google colab. Did you try to uninstall datasets first, before updating it to master's version ?
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
23
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.4354047179, -0.021331273, -0.0379032679, 0.2603291869, 0.1033043489, -0.052439712, 0.2046662718, 0.3077458441, 0.0026613548, 0.1889888048, 0.0298017263, 0.4250422716, -0.4757447243, 0.3481318057, -0.3869569302, 0.0821055323, 0.0981037691, -0.0260528252, -0.4865830541, 0.2010...
https://github.com/huggingface/datasets/issues/620
map/filter multiprocessing raises errors and corrupts datasets
I didn't -- it was a new sessions --- buuut - look like it's working today -- woot! I'll close this issue. Thanks @lhoestq
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
24
map/filter multiprocessing raises errors and corrupts datasets After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_d...
[ -0.4389112592, -0.1553173959, -0.0303597879, 0.2238799483, 0.1435325593, -0.1580420583, 0.3082617819, 0.3411380947, 0.1361239702, 0.1270736009, 0.0647981912, 0.3830270767, -0.431142509, 0.3586109281, -0.3158304393, 0.0192880966, 0.1044041067, -0.0822965726, -0.372672081, 0.2437...
https://github.com/huggingface/datasets/issues/619
Mistakes in MLQA features names
Indeed you're right ! Thanks for reporting that Could you open a PR to fix the features names ?
I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et...
19
Mistakes in MLQA features names I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA dat...
[ 0.2796131074, -0.0672782585, -0.0831843615, 0.1901167333, 0.2244755477, 0.3354705274, 0.5049107671, 0.1621007472, -0.2027409226, -0.0173159614, 0.1145084947, 0.1897463799, 0.3583107293, 0.4374579489, -0.0695422962, -0.2454163581, 0.24638547, -0.0470876172, 0.2421890348, -0.0504...
https://github.com/huggingface/datasets/issues/617
Compare different Rouge implementations
Updates - the differences between the following three (1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most) (2) https://github.com/google-research/google-research/tree/master/rouge (3) https://github.com/pltrdy/files2rouge (used in fairseq) can be explained by two things, stemmi...
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
145
Compare different Rouge implementations I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar...
[ -0.0971115455, -0.1454415917, -0.1018338799, 0.244695574, -0.2790500224, -0.6873998046, 0.0966217667, 0.0406913422, -0.2773723304, 0.2449081689, -0.1052664518, 0.1245480105, 0.1565690786, 0.0299381055, 0.1508820653, -0.2753461599, 0.0824456736, 0.0195159987, -0.0070212879, -0.1...
https://github.com/huggingface/datasets/issues/617
Compare different Rouge implementations
This is a real issue, sorry for missing the mention @ibeltagy We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\n` so that rougeLsum scores match papers even...
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
144
Compare different Rouge implementations I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar...
[ -0.1379878074, 0.0537788421, -0.0650966018, 0.2345047742, -0.0384499803, -0.6865092516, -0.1374981999, -0.0325632468, -0.2779028416, 0.3952672482, -0.0672829002, 0.1716643721, 0.1589006633, 0.3637019396, 0.2211120278, -0.0784866884, 0.134586975, -0.0168794114, -0.10502626, -0.1...
https://github.com/huggingface/datasets/issues/617
Compare different Rouge implementations
> This is a real issue, sorry for missing the mention @ibeltagy > > We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\n` so that rougeLsum scores match paper...
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
210
Compare different Rouge implementations I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar...
[ -0.1199247539, 0.0578207672, -0.0259994734, 0.2475375086, -0.0194166508, -0.7114323378, -0.0715660527, -0.0210266747, -0.241076842, 0.3681191504, -0.0042178426, 0.175264433, 0.1229401007, 0.3598645031, 0.297547847, -0.1921322197, 0.098610498, -0.0162564386, -0.0499817058, -0.09...
https://github.com/huggingface/datasets/issues/617
Compare different Rouge implementations
Hi, thanks for the solution. I am not sure if this is a bug, but on line [510](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L510), are pred, tgt supposed to be swapped?
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
25
Compare different Rouge implementations I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar...
[ -0.0462396555, -0.3402345181, -0.0140984543, 0.3791038692, -0.0787524059, -0.6177636385, 0.0081205843, 0.0337360837, -0.3628863096, 0.3499980271, -0.2657789588, -0.1302829534, 0.1970667243, 0.2637876868, 0.2604076564, -0.2404888868, 0.0870678052, -0.0158298891, -0.3073504269, -...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I think the only way to avoid this warning would be to do a copy of the numpy array before providing it. This would slow down a bit the iteration over the dataset but maybe it would be safer. We could disable the copy with a flag on the `set_format` command. In most typical cases of training a NLP model, PyTorch ...
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
106
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
@thomwolf Would it be possible to have the array look writeable, but raise an error if it is actually written to? I would like to keep my code free of warning, but I also wouldn't like to slow down the program because of unnecessary copy operations.
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
46
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
Well because I don't know the internal of numpy as well as you I guess hahahah, do you want to try to open a PR proposing a solution?
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
28
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
@thomwolf @AndreasMadsen I think this is a terrible idea, n/o, and I am very much against it. Modifying internals of an array in such a hacky way is bound to run into other (user) issues down the line. To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) ...
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
155
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
> To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing. Confusion can be resolved with a helpful error message. In this case, that error message can be controlled b...
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
222
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
> The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case. That's exactly the argument in my first sentence. Too often someone "cannot think of a use-case", but you can not foresee th...
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
198
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
> But this is not a plain use-case (because Pytorch does not support these read-only tensors). By "plain", I mean the recommended way to use `datasets` with PyTorch according to the `datasets` documentation.
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
33
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
This error is what I see when I run the first lines of the Pytorch Quickstart. It should also say that it should be ignored and/or how to fix it. BTW, this is a Pytorch error message -- not a Huggingface error message. My code runs anyway.
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
47
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be...
[ 0.238243863, -0.3691008389, 0.0651261508, 0.1298751533, 0.4380830824, 0.0913998485, 0.6286587119, 0.2343866974, 0.2976015806, 0.0604874305, -0.1968635321, 0.4237813354, -0.3091079891, -0.2186418474, -0.256572634, -0.207190901, -0.0199279841, 0.1609408259, 0.0264152531, -0.02684...
https://github.com/huggingface/datasets/issues/615
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
Related: https://issues.apache.org/jira/browse/ARROW-9773 It's definitely a size thing. I took a smaller dataset with 87000 rows and did: ``` for i in range(10,1000,20): table = pa.concat_tables([dset._data]*i) table.take([0]) ``` and it broke at around i=300. Also when `_indices` is not None, this ...
How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-38...
108
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowIn...
[ -0.3078597486, -0.162481457, -0.0384138711, 0.1917552948, 0.1244792864, -0.0056269197, 0.2420148104, 0.1383451521, -0.3982104659, 0.361923784, 0.3079871535, 0.4601459205, 0.0280527472, 0.0383536592, 0.0241384134, -0.088973321, -0.0426646136, 0.060955327, -0.1887905449, -0.09364...
https://github.com/huggingface/datasets/issues/611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
``` <class 'pandas.core.frame.DataFrame'> Int64Index: 17136104 entries, 0 to 17136103 Data columns (total 6 columns): # Column Dtype --- ------ ----- 0 item_id int64 1 item_titl object 2 start_price float64 3 shipping_fee float64 4 picture_url object 5...
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
47
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most rece...
[ -0.2813220918, -0.0744088963, -0.2195497453, 0.4159465432, 0.2823321819, -0.0299139991, 0.4081524312, 0.211316064, 0.3901509643, 0.0474444032, 0.0003714644, 0.3082345128, -0.086982362, 0.0685478151, 0.000835738, -0.3419587016, -0.0933910981, 0.227939263, -0.2755077481, 0.093867...