| column | type | values |
| --- | --- | --- |
| title | string | lengths 1–290 |
| body | string | lengths 0–228k (nullable) |
| html_url | string | lengths 46–51 |
| comments | list | |
| pull_request | dict | |
| number | int64 | 1–5.59k |
| is_pull_request | bool | 2 classes |
Update ASR tags
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
https://github.com/huggingface/datasets/pull/2633
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2633", "html_url": "https://github.com/huggingface/datasets/pull/2633", "diff_url": "https://github.com/huggingface/datasets/pull/2633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2633.patch", "merged_at": "2021-07-13T05:45...
2,633
true
add image-classification task template
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_datase...
https://github.com/huggingface/datasets/pull/2632
[ "Awesome!", "Thanks for adding a new task template - great work @nateraw πŸš€ !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2632", "html_url": "https://github.com/huggingface/datasets/pull/2632", "diff_url": "https://github.com/huggingface/datasets/pull/2632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2632.patch", "merged_at": "2021-07-13T15:28...
2,632
true
Delete extracted files when loading dataset
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
https://github.com/huggingface/datasets/pull/2631
[ "Sure @stas00, it is still a draft pull request. :)", "Yes, I noticed it after reviewing - my apologies.", "The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). 😟 ", "> The problem with this approach is that it also deletes the downloaded files (if they...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2631", "html_url": "https://github.com/huggingface/datasets/pull/2631", "diff_url": "https://github.com/huggingface/datasets/pull/2631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2631.patch", "merged_at": "2021-07-19T09:08...
2,631
true
Progress bars are not properly rendered in Jupyter notebook
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plain progress bars. cc...
https://github.com/huggingface/datasets/issues/2630
[ "To add my experience when trying to debug this issue:\r\n\r\nSeems like previously the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message fro...
null
2,630
false
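A hedged sketch of the behaviour #2630 expects, assuming the bars come from `tqdm`: `tqdm.auto` renders a Jupyter widget when `ipywidgets` is installed and falls back to a terminal-style bar otherwise.

```python
# Sketch only: illustrates the widget-vs-terminal fallback, not datasets internals.
from tqdm.auto import tqdm  # picks the notebook widget flavour if ipywidgets is available

for _ in tqdm(range(10), desc="demo"):
    pass
```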
Load datasets from the Hub without requiring a dataset script
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `da...
https://github.com/huggingface/datasets/issues/2629
[ "This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) πŸŽ‰ " ]
null
2,629
false
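A sketch of the usage #2629 asks for; the repo and file names below are hypothetical and the final API may differ:

```python
from datasets import load_dataset

# Hypothetical Hub repo containing plain csv files and no dataset script.
dataset = load_dataset(
    "username/my-dataset",
    data_files={"train": "train.csv", "test": "test.csv"},
)
```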
Use ETag of remote data files
Use ETag of remote data files to create config ID. Related to #2616.
https://github.com/huggingface/datasets/pull/2628
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2628", "html_url": "https://github.com/huggingface/datasets/pull/2628", "diff_url": "https://github.com/huggingface/datasets/pull/2628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2628.patch", "merged_at": "2021-07-12T08:40...
2,628
true
Minor fix tests with Windows paths
Minor fix tests with Windows paths.
https://github.com/huggingface/datasets/pull/2627
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2627", "html_url": "https://github.com/huggingface/datasets/pull/2627", "diff_url": "https://github.com/huggingface/datasets/pull/2627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2627.patch", "merged_at": "2021-07-12T08:34...
2,627
true
Use correct logger in metrics.py
Fixes #2624
https://github.com/huggingface/datasets/pull/2626
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2626", "html_url": "https://github.com/huggingface/datasets/pull/2626", "diff_url": "https://github.com/huggingface/datasets/pull/2626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2626.patch", "merged_at": "2021-07-12T05:54...
2,626
true
βš›οΈπŸ˜‡βš™οΈπŸ”‘
https://github.com/huggingface/datasets/issues/2625
[]
null
2,625
false
can't set verbosity for `metric.py`
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingfa...
https://github.com/huggingface/datasets/issues/2624
[ "Thanks @thomas-happify for reporting and thanks @mariosasko for the fix." ]
null
2,624
false
[Metrics] added wiki_split metrics
Fixes: #2606 This pull request adds combined metrics for the WikiSplit (English sentence splitting) task. Reviewer: @patrickvonplaten
https://github.com/huggingface/datasets/pull/2623
[ "Looks all good to me thanks :)\r\nJust did some minor corrections in the docstring" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2623", "html_url": "https://github.com/huggingface/datasets/pull/2623", "diff_url": "https://github.com/huggingface/datasets/pull/2623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2623.patch", "merged_at": "2021-07-12T22:34...
2,623
true
Integration with AugLy
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m...
https://github.com/huggingface/datasets/issues/2622
[ "Hi,\r\n\r\nyou can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:\r\n```python\r\ndset = load_dataset(\"imdb\", split=\"train\") # Let's say we are working with the IMDB dataset\r\ndset.set_transform(lambda ex: {\"text\": ...
null
2,622
false
Use prefix to allow exceed Windows MAX_PATH
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
https://github.com/huggingface/datasets/pull/2621
[ "Does this mean the `FileNotFoundError` that avoids infinite loop can be removed?", "Yes, I think so...", "Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?", " > Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?\r\n\r\nWhat about converting relative p...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2621", "html_url": "https://github.com/huggingface/datasets/pull/2621", "diff_url": "https://github.com/huggingface/datasets/pull/2621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2621.patch", "merged_at": "2021-07-16T15:28...
2,621
true
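The Win32 namespace prefix from #2621 can be applied roughly as below; this is a sketch and the helper name is made up:

```python
import os
import sys

def extend_windows_path(path: str) -> str:
    # Prefixing an absolute path with \\?\ lets it exceed the
    # 260-character MAX_PATH limit on Windows.
    if sys.platform == "win32" and not path.startswith("\\\\?\\"):
        return "\\\\?\\" + os.path.abspath(path)
    return path
```

Converting to an absolute path first also covers the relative-path case discussed in the comments, since the `\\?\` prefix only applies to absolute paths.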
Add speech processing tasks
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
https://github.com/huggingface/datasets/pull/2620
[ "Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?", "> Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?\r\n\r\nYes there's a few - I'll fix them tomorrow :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2620", "html_url": "https://github.com/huggingface/datasets/pull/2620", "diff_url": "https://github.com/huggingface/datasets/pull/2620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2620.patch", "merged_at": "2021-07-12T17:32...
2,620
true
Add ASR task for SUPERB
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset ...
https://github.com/huggingface/datasets/pull/2619
[ "Wait until #2620 is merged before pushing the README tags in this PR", "> Thanks!\r\n> \r\n> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`?\r\n\r\ngreat catch! i've now added the asr task template (along with a mapping from superb task -> template) and up...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2619", "html_url": "https://github.com/huggingface/datasets/pull/2619", "diff_url": "https://github.com/huggingface/datasets/pull/2619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2619.patch", "merged_at": "2021-07-13T12:40...
2,619
true
`filelock.py` Error
## Describe the bug It seems that `filelock.py` raised an error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) ...
https://github.com/huggingface/datasets/issues/2618
[ "Hi @liyucheng09, thanks for reporting.\r\n\r\nApparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to pr...
null
2,618
false
Fix missing EOL issue in to_json for old versions of pandas
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples on the same line. Close https://github.com/huggingface/datasets/issues/2615
https://github.com/huggingface/datasets/pull/2617
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2617", "html_url": "https://github.com/huggingface/datasets/pull/2617", "diff_url": "https://github.com/huggingface/datasets/pull/2617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2617.patch", "merged_at": "2021-07-09T15:28...
2,617
true
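A minimal sketch of the idea behind #2617, assuming the goal is simply to guarantee one record per line across batches:

```python
import pandas as pd

df = pd.DataFrame({"text": ["a", "b"]})
json_lines = df.to_json(orient="records", lines=True)
# Older pandas versions may omit the trailing end-of-line, so two
# consecutive batches could merge two samples onto one line.
if not json_lines.endswith("\n"):
    json_lines += "\n"
```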
Support remote data files
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
https://github.com/huggingface/datasets/pull/2616
[ "@lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?", "> @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?\r\n\r\nSure ! We can get the ETag with\r\n```python\r\nheaders = get_authentication_headers_for_url(url, use_a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2616", "html_url": "https://github.com/huggingface/datasets/pull/2616", "diff_url": "https://github.com/huggingface/datasets/pull/2616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2616.patch", "merged_at": "2021-07-09T16:13...
2,616
true
Jsonlines export error
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10,000 by default. ## Steps to reproduce the bug This wha...
https://github.com/huggingface/datasets/issues/2615
[ "Thanks for reporting @TevenLeScao! I'm having a look...", "(not sure what just happened on the assignations sorry)", "For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.", "@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it...
null
2,615
false
Convert numpy scalar to python float in Pearsonr output
Follow-up to https://github.com/huggingface/datasets/pull/2612
https://github.com/huggingface/datasets/pull/2614
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2614", "html_url": "https://github.com/huggingface/datasets/pull/2614", "diff_url": "https://github.com/huggingface/datasets/pull/2614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2614.patch", "merged_at": "2021-07-09T14:04...
2,614
true
Use ndarray.item instead of ndarray.tolist
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#nump...
https://github.com/huggingface/datasets/pull/2613
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2613", "html_url": "https://github.com/huggingface/datasets/pull/2613", "diff_url": "https://github.com/huggingface/datasets/pull/2613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2613.patch", "merged_at": "2021-07-09T13:50...
2,613
true
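The distinction #2613 is about, in two lines:

```python
import numpy as np

score = np.float64(0.5)
print(type(score.tolist()))  # <class 'float'> -- works, but the name suggests a list
print(type(score.item()))    # <class 'float'> -- clearer intent for a 0-d value
```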
Return Python float instead of numpy.float64 in sklearn metrics
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-...
https://github.com/huggingface/datasets/pull/2612
[ "I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490", "It could be surprising at first to use `tolist()` on numpy scalars but it works ^^", "did the same for Pearsonr here: https://github.com/huggingface/datase...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2612", "html_url": "https://github.com/huggingface/datasets/pull/2612", "diff_url": "https://github.com/huggingface/datasets/pull/2612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2612.patch", "merged_at": "2021-07-09T13:03...
2,612
true
More consistent naming
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`πŸ€—Datasets` -> `πŸ€— Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
https://github.com/huggingface/datasets/pull/2611
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2611", "html_url": "https://github.com/huggingface/datasets/pull/2611", "diff_url": "https://github.com/huggingface/datasets/pull/2611.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2611.patch", "merged_at": "2021-07-13T16:08...
2,611
true
Add missing WikiANN language tags
Add missing language tags for WikiANN datasets.
https://github.com/huggingface/datasets/pull/2610
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2610", "html_url": "https://github.com/huggingface/datasets/pull/2610", "diff_url": "https://github.com/huggingface/datasets/pull/2610.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2610.patch", "merged_at": "2021-07-08T15:44...
2,610
true
Fix potential DuplicatedKeysError
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote it as good practice that keys be generated programmatically as unique, instead of read from the data (which might not be unique).
https://github.com/huggingface/datasets/pull/2609
[ "Finally, I'm splitting this PR." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2609", "html_url": "https://github.com/huggingface/datasets/pull/2609", "diff_url": "https://github.com/huggingface/datasets/pull/2609.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2609.patch", "merged_at": "2021-07-09T16:42...
2,609
true
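A sketch of the practice #2609 recommends, with hypothetical field names: derive each key from the enumeration index rather than reading it from the data.

```python
def _generate_examples(filepath):
    # Keys built from the row index are unique by construction,
    # unlike ids read from the data, which may repeat.
    with open(filepath, encoding="utf-8") as f:
        for row_idx, line in enumerate(f):
            yield row_idx, {"text": line.strip()}
```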
Support streaming JSON files
Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming. Close #2607.
https://github.com/huggingface/datasets/pull/2608
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2608", "html_url": "https://github.com/huggingface/datasets/pull/2608", "diff_url": "https://github.com/huggingface/datasets/pull/2608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2608.patch", "merged_at": "2021-07-08T16:08...
2,608
true
Streaming local gzip compressed JSON line files is not working
## Describe the bug Using streaming to iterate on local gzip-compressed JSON files raises a file-not-found error. ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
https://github.com/huggingface/datasets/issues/2607
[ "Updating to pyarrow-4.0.1 didn't fix the issue", "Here is an exemple dataset with 2 of these compressed JSON files: https://huggingface.co/datasets/thomwolf/github-python", "Hi @thomwolf, thanks for reporting.\r\n\r\nIt seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj...
null
2,607
false
[Metrics] addition of wiki_split metrics
**Is your feature request related to a problem? Please describe.** While training a model on the English sentence-splitting task, we need to evaluate it on `Exact Match`, `SARI` and `BLEU` scores, like this ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01...
https://github.com/huggingface/datasets/issues/2606
[ "#take" ]
null
2,606
false
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
During the FLAX sprint some users hit this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing. Therefore I extended the error type that triggers the retry to be the base aiohttp er...
https://github.com/huggingface/datasets/pull/2605
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2605", "html_url": "https://github.com/huggingface/datasets/pull/2605", "diff_url": "https://github.com/huggingface/datasets/pull/2605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2605.patch", "merged_at": "2021-07-07T08:59...
2,605
true
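The retry idea in #2605, sketched with an illustrative function and backoff policy (not the PR's code):

```python
import asyncio
import aiohttp

async def fetch_with_retry(url: str, max_retries: int = 3) -> bytes:
    for attempt in range(max_retries):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as resp:
                    return await resp.read()
        except aiohttp.ClientError:  # base class, also covers ClientOSError
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)
```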
Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset consisting of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of Arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to strea...
https://github.com/huggingface/datasets/issues/2604
[ "Hi !\r\nIf we want something more general, we could either\r\n1. delete the extracted files after the arrow data generation automatically, or \r\n2. delete each extracted file during the arrow generation right after it has been closed.\r\n\r\nSolution 2 is better to save disk space during the arrow generation. Is ...
null
2,604
false
Fix DuplicatedKeysError in omp
Close #2598.
https://github.com/huggingface/datasets/pull/2603
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2603", "html_url": "https://github.com/huggingface/datasets/pull/2603", "diff_url": "https://github.com/huggingface/datasets/pull/2603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2603.patch", "merged_at": "2021-07-07T12:56...
2,603
true
Remove import of transformers
When pickling a tokenizer within multiprocessing, check that it is an instance of the transformers `PreTrainedTokenizerBase` without importing transformers. Related to huggingface/transformers#12549 and #502.
https://github.com/huggingface/datasets/pull/2602
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2602", "html_url": "https://github.com/huggingface/datasets/pull/2602", "diff_url": "https://github.com/huggingface/datasets/pull/2602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2602.patch", "merged_at": "2021-07-07T08:28...
2,602
true
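One common way to perform such a check without triggering the import, shown as a hedged sketch rather than the PR's exact code:

```python
import sys

def is_pretrained_tokenizer(obj) -> bool:
    # Only relevant if the user has already imported transformers; walking
    # the MRO by class name avoids importing the library ourselves.
    if "transformers" not in sys.modules:
        return False
    return any(cls.__name__ == "PreTrainedTokenizerBase" for cls in type(obj).__mro__)
```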
Fix `filter` with multiprocessing in case all samples are discarded
Fixes #2600. I also moved the check (added in #2566) for `num_proc` larger than the dataset size further up, so that multiprocessing is not used with a single process.
https://github.com/huggingface/datasets/pull/2601
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2601", "html_url": "https://github.com/huggingface/datasets/pull/2601", "diff_url": "https://github.com/huggingface/datasets/pull/2601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2601.patch", "merged_at": "2021-07-07T12:50...
2,601
true
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) dat...
https://github.com/huggingface/datasets/issues/2600
[]
null
2,600
false
Update processing.rst with other export formats
Add the other supported export formats (besides CSV) to the docs.
https://github.com/huggingface/datasets/pull/2599
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2599", "html_url": "https://github.com/huggingface/datasets/pull/2599", "diff_url": "https://github.com/huggingface/datasets/pull/2599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2599.patch", "merged_at": "2021-07-07T08:05...
2,599
true
Unable to download omp dataset
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual r...
https://github.com/huggingface/datasets/issues/2598
[ "Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp datas...
null
2,598
false
Remove redundant prepare_module
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
https://github.com/huggingface/datasets/pull/2597
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2597", "html_url": "https://github.com/huggingface/datasets/pull/2597", "diff_url": "https://github.com/huggingface/datasets/pull/2597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2597.patch", "merged_at": "2021-07-07T13:01...
2,597
true
Transformer Class on dataset
Just wondering if you have any intention to create a Transformer class (dataset --> dataset) that makes deterministic transformations (i.e. not fit).
https://github.com/huggingface/datasets/issues/2596
[ "Hi ! Do you have an example in mind that shows how this could be useful ?", "Example:\n\nMerge 2 datasets into one datasets\n\nLabel extraction from dataset\n\ndataset(text, label)\n β€”> dataset(text, newlabel)\n\nTextCleaning.\n\n\nFor image dataset, \nTransformation are easier (ie linear algebra).\n\n\n\n\n\n...
null
2,596
false
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_da...
https://github.com/huggingface/datasets/issues/2595
[ "Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.", "@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. " ]
null
2,595
false
Fix BibTeX entry
Fix BibTeX entry.
https://github.com/huggingface/datasets/pull/2594
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2594", "html_url": "https://github.com/huggingface/datasets/pull/2594", "diff_url": "https://github.com/huggingface/datasets/pull/2594.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2594.patch", "merged_at": "2021-07-06T04:59...
2,594
true
Support pandas 1.3.0 read_csv
Workaround for this issue in pandas 1.3.0: https://github.com/pandas-dev/pandas/issues/42387 The CSV reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on...
https://github.com/huggingface/datasets/pull/2593
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2593", "html_url": "https://github.com/huggingface/datasets/pull/2593", "diff_url": "https://github.com/huggingface/datasets/pull/2593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2593.patch", "merged_at": "2021-07-05T17:14...
2,593
true
Add c4.noclean infos
Adding the data file checksums and the dataset size for the c4.noclean configuration of the C4 dataset.
https://github.com/huggingface/datasets/pull/2592
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2592", "html_url": "https://github.com/huggingface/datasets/pull/2592", "diff_url": "https://github.com/huggingface/datasets/pull/2592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2592.patch", "merged_at": "2021-07-05T13:15...
2,592
true
Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues with the Hugging Face cached dataset folder completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
https://github.com/huggingface/datasets/issues/2591
[ "Hi! I'm transferring this issue over to `datasets`", "I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n", "Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be store...
null
2,591
false
Add language tags
This PR adds some missing language tags needed for ASR datasets in #2565
https://github.com/huggingface/datasets/pull/2590
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2590", "html_url": "https://github.com/huggingface/datasets/pull/2590", "diff_url": "https://github.com/huggingface/datasets/pull/2590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2590.patch", "merged_at": "2021-07-05T10:58...
2,590
true
Support multilabel metrics
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
https://github.com/huggingface/datasets/pull/2589
[ "Hi ! Thanks for the fix :)\r\n\r\nIf I understand correctly, `OptionalSequence` doesn't have an associated arrow type that we know in advance unlike the other feature types, because it depends on the type of the examples.\r\n\r\nFor example, I tested this and it raises an error:\r\n```python\r\nimport datasets as ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2589", "html_url": "https://github.com/huggingface/datasets/pull/2589", "diff_url": "https://github.com/huggingface/datasets/pull/2589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2589.patch", "merged_at": "2021-07-08T08:40...
2,589
true
Fix test_is_small_dataset
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the environment variable is read in `datasets.config` when `datasets` is first loaded, and it is never re-read during tests.
https://github.com/huggingface/datasets/pull/2588
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2588", "html_url": "https://github.com/huggingface/datasets/pull/2588", "diff_url": "https://github.com/huggingface/datasets/pull/2588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2588.patch", "merged_at": "2021-07-06T17:09...
2,588
true
Add aiohttp to tests extras require
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp and this is missing from our `tests` extras_require dependencies. Our CI test suite should be exhaustive and test all the library functionalities.
https://github.com/huggingface/datasets/pull/2587
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2587", "html_url": "https://github.com/huggingface/datasets/pull/2587", "diff_url": "https://github.com/huggingface/datasets/pull/2587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2587.patch", "merged_at": "2021-07-05T09:04...
2,587
true
Fix misalignment in SQuAD
Fix misalignment between the answer text and the answer_start within the context, by keeping the original leading blank spaces in the context. Fix #2585.
https://github.com/huggingface/datasets/pull/2586
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2586", "html_url": "https://github.com/huggingface/datasets/pull/2586", "diff_url": "https://github.com/huggingface/datasets/pull/2586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2586.patch", "merged_at": "2021-07-07T13:18...
2,586
true
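The invariant #2586 restores can be expressed as a small check (field names follow the SQuAD schema):

```python
def check_alignment(example):
    # The answer text must occur in the context exactly at answer_start.
    context = example["context"]
    answers = example["answers"]
    for text, start in zip(answers["text"], answers["answer_start"]):
        assert context[start : start + len(text)] == text
```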
sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['P...
https://github.com/huggingface/datasets/issues/2585
[ "Hi @mmajurski, thanks for reporting this issue.\r\n\r\nIndeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.\r\n\r\nI'm going to fix our script so ...
null
2,585
false
wi_locness: reference latest leaderboard on codalab
The dataset's author asked me to put this codalab link into the dataset's README.
https://github.com/huggingface/datasets/pull/2584
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2584", "html_url": "https://github.com/huggingface/datasets/pull/2584", "diff_url": "https://github.com/huggingface/datasets/pull/2584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2584.patch", "merged_at": "2021-07-05T09:06...
2,584
true
Error iteration over IterableDataset using Torch DataLoader
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh...
https://github.com/huggingface/datasets/issues/2583
[ "Hi ! This is because you first need to format the dataset for pytorch:\r\n\r\n```python\r\n>>> import torch\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n>>> torch_iterable_dataset = dataset.with_format(\"torch\")\r...
null
2,583
false
Add skip and take
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets. You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a...
https://github.com/huggingface/datasets/pull/2582
[ "@lhoestq looks good. I tried with https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed and it worked nicely. I would add more unit tests for edge cases. What happens if the n is larger than the total number of samples? Just to make sure these cases are handled properly. ", "Yup I'll add the tests...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2582", "html_url": "https://github.com/huggingface/datasets/pull/2582", "diff_url": "https://github.com/huggingface/datasets/pull/2582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2582.patch", "merged_at": "2021-07-05T16:06...
2,582
true
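Conceptually, `take` and `skip` from #2582 behave like slicing the example stream; a toy sketch with a plain iterator:

```python
from itertools import islice

stream = iter(range(10))        # stand-in for an IterableDataset's examples
head = list(islice(stream, 3))  # take(3): the first three examples
rest = list(stream)             # skip(3): everything after them
print(head, rest)               # [0, 1, 2] [3, 4, 5, 6, 7, 8, 9]
```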
Faster search_batch for ElasticsearchIndex due to threading
Hey, I think it makes sense to run `search_batch` in threads, so ES can perform the searches in parallel. Cheers!
https://github.com/huggingface/datasets/pull/2581
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2581", "html_url": "https://github.com/huggingface/datasets/pull/2581", "diff_url": "https://github.com/huggingface/datasets/pull/2581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2581.patch", "merged_at": "2021-07-12T09:52...
2,581
true
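The idea in #2581, in miniature; the client call is illustrative and assumes an elasticsearch-py-style `search` method:

```python
from concurrent.futures import ThreadPoolExecutor

def search_batch(client, index, queries, k=10):
    # Submitting the per-query searches from worker threads lets
    # Elasticsearch serve them in parallel rather than one by one.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(client.search, index=index, q=q, size=k) for q in queries]
        return [f.result() for f in futures]
```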
Fix Counter import
Import from `collections` instead of `typing`.
https://github.com/huggingface/datasets/pull/2580
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2580", "html_url": "https://github.com/huggingface/datasets/pull/2580", "diff_url": "https://github.com/huggingface/datasets/pull/2580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2580.patch", "merged_at": "2021-07-02T14:37...
2,580
true
Fix BibTeX entry
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
https://github.com/huggingface/datasets/pull/2579
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2579", "html_url": "https://github.com/huggingface/datasets/pull/2579", "diff_url": "https://github.com/huggingface/datasets/pull/2579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2579.patch", "merged_at": "2021-07-02T07:33...
2,579
true
Support Zstandard compressed files
Close #2572. cc: @thomwolf
https://github.com/huggingface/datasets/pull/2578
[ "> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2578", "html_url": "https://github.com/huggingface/datasets/pull/2578", "diff_url": "https://github.com/huggingface/datasets/pull/2578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2578.patch", "merged_at": "2021-07-05T10:50...
2,578
true
Add mC4
AllenAI is now hosting the processed C4 and mC4 datasets in this repo: https://huggingface.co/datasets/allenai/c4 (thanks a lot to them!). In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with ```python from datasets import load_dataset en_mc4 = load_dataset("mc4", "en") f...
https://github.com/huggingface/datasets/pull/2576
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2576", "html_url": "https://github.com/huggingface/datasets/pull/2576", "diff_url": "https://github.com/huggingface/datasets/pull/2576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2576.patch", "merged_at": "2021-07-02T14:50...
2,576
true
Add C4
The old code for the C4 dataset generated C4 with Apache Beam, as in TensorFlow Datasets. However, AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 (thanks a lot to them for their amazing work!). In this PR I changed the script to download and prepare 
https://github.com/huggingface/datasets/pull/2575
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2575", "html_url": "https://github.com/huggingface/datasets/pull/2575", "diff_url": "https://github.com/huggingface/datasets/pull/2575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2575.patch", "merged_at": "2021-07-02T14:50...
2,575
true
Add streaming in load a dataset docs
Mention dataset streaming on the "loading a dataset" page of the documentation
https://github.com/huggingface/datasets/pull/2574
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2574", "html_url": "https://github.com/huggingface/datasets/pull/2574", "diff_url": "https://github.com/huggingface/datasets/pull/2574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2574.patch", "merged_at": "2021-07-01T14:12...
2,574
true
Finding right block-size with JSON loading difficult for user
As reported by @thomwolf, while loading a JSON Lines file with the "json" loading script, he gets > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
https://github.com/huggingface/datasets/issues/2573
[ "This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user" ]
null
2,573
false
Support Zstandard compressed files
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
https://github.com/huggingface/datasets/issues/2572
[ "I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook.\r\n\r\n```\r\n!pip install zstandard\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"j...
null
2,572
false
Filter expected warning log from transformers
Close #2569.
https://github.com/huggingface/datasets/pull/2571
[ "I think the failing test has nothing to do with my PR..." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2571", "html_url": "https://github.com/huggingface/datasets/pull/2571", "diff_url": "https://github.com/huggingface/datasets/pull/2571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2571.patch", "merged_at": "2021-07-02T04:08...
2,571
true
Minor fix docs format for bertscore
Minor fixes to the docs format for bertscore: the link to the README and the format of KWARGS_DESCRIPTION.
https://github.com/huggingface/datasets/pull/2570
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2570", "html_url": "https://github.com/huggingface/datasets/pull/2570", "diff_url": "https://github.com/huggingface/datasets/pull/2570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2570.patch", "merged_at": "2021-06-30T15:31...
2,570
true
Weights of model checkpoint not initialized for RobertaModel for Bertscore
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical ...
https://github.com/huggingface/datasets/issues/2569
[ "Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.wei...
null
2,569
false
Add interleave_datasets for map-style datasets
### Add interleave_datasets for map-style datasets Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. It previously only supported iterable datasets (i.e. `IterableDataset` objects). ### Implementation details It works by concatenating the datasets and then re-ordering the indices to...
https://github.com/huggingface/datasets/pull/2568
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2568", "html_url": "https://github.com/huggingface/datasets/pull/2568", "diff_url": "https://github.com/huggingface/datasets/pull/2568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2568.patch", "merged_at": "2021-07-01T09:33...
2,568
true
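A toy illustration of the index re-ordering described in #2568, for two concatenated datasets of two rows each:

```python
# Rows after concatenation: [a0, a1, b0, b1]; alternating picks [a0, b0, a1, b1].
lengths = [2, 2]
offsets = [0, 2]  # starting row of each dataset within the concatenation
order = [offsets[d] + i for i in range(min(lengths)) for d in range(len(lengths))]
print(order)  # [0, 2, 1, 3]
```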
Add ASR task and new languages to resources
This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`. Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks
https://github.com/huggingface/datasets/pull/2567
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2567", "html_url": "https://github.com/huggingface/datasets/pull/2567", "diff_url": "https://github.com/huggingface/datasets/pull/2567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2567.patch", "merged_at": "2021-07-01T09:42...
2,567
true
fix Dataset.map when num_procs > num rows
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map...
https://github.com/huggingface/datasets/pull/2566
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2566", "html_url": "https://github.com/huggingface/datasets/pull/2566", "diff_url": "https://github.com/huggingface/datasets/pull/2566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2566.patch", "merged_at": "2021-07-01T09:11...
2,566
true
Inject templates for ASR datasets
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them. I also fixed a bunch of the tags in the READMEs 😎
https://github.com/huggingface/datasets/pull/2565
[ "Wait until #2567 is merged so we can benefit from the tagger :)", "thanks for the feedback @lhoestq! i've added the new language codes and this PR should be ready for a merge :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2565", "html_url": "https://github.com/huggingface/datasets/pull/2565", "diff_url": "https://github.com/huggingface/datasets/pull/2565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2565.patch", "merged_at": "2021-07-05T14:26...
2,565
true
concatenate_datasets for iterable datasets
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
https://github.com/huggingface/datasets/issues/2564
[ "It is probably worth noting here that the [documentation](https://huggingface.co/docs/datasets/process#concatenate) is misleading (indicating that it does work for IterableDatasets):\r\n\r\n> You can also mix several datasets together by taking alternating examples from each one to create a new dataset. This is kn...
null
2,564
false
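The chaining behaviour #2564 asks for, sketched with plain Python iterables:

```python
from itertools import chain

ds1 = [{"id": 0}, {"id": 1}]  # stand-ins for two IterableDatasets
ds2 = [{"id": 2}]
print(list(chain(ds1, ds2)))  # yields ds1 fully, then ds2
```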
interleave_datasets for map-style datasets
Currently the `interleave_datasets` function only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order or randomly, given probabilities specified by the user.
https://github.com/huggingface/datasets/issues/2563
[]
null
2,563
false
Minor fix in loading metrics docs
Make some minor fixes in "Loading metrics" docs.
https://github.com/huggingface/datasets/pull/2562
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2562", "html_url": "https://github.com/huggingface/datasets/pull/2562", "diff_url": "https://github.com/huggingface/datasets/pull/2562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2562.patch", "merged_at": "2021-06-29T17:21...
2,562
true
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug If I have a local file defining a dataset builder class and I load it using `load_dataset`, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce th...
https://github.com/huggingface/datasets/issues/2561
[ "Hi ! I just tried to reproduce what you said:\r\n- create a local builder class\r\n- use `load_dataset`\r\n- update the builder class code\r\n- use `load_dataset` again (with or without `ignore_verifications=True`)\r\nAnd it creates a new cache, as expected.\r\n\r\nWhat modifications did you do to your builder's c...
null
2,561
false
fix Dataset.map when num_procs > num rows
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map...
https://github.com/huggingface/datasets/pull/2560
[ "Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2560", "html_url": "https://github.com/huggingface/datasets/pull/2560", "diff_url": "https://github.com/huggingface/datasets/pull/2560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2560.patch", "merged_at": null }
2,560
true
Memory usage consistently increases when processing a dataset with `.map`
## Describe the bug I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch...
https://github.com/huggingface/datasets/issues/2559
[ "Hi ! Can you share the function you pass to `map` ?\r\nI know you mentioned it would be hard to share some code but this would really help to understand what happened" ]
null
2,559
false
Update: WebNLG - update checksums
The master branch changed so I computed the new checksums. I also pinned a specific revision so that it doesn't happen again in the future. Fix https://github.com/huggingface/datasets/issues/2553
https://github.com/huggingface/datasets/pull/2558
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2558", "html_url": "https://github.com/huggingface/datasets/pull/2558", "diff_url": "https://github.com/huggingface/datasets/pull/2558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2558.patch", "merged_at": "2021-06-28T17:23...
2,558
true
Fix `fever` keys
The keys had duplicates since they were reset to 0 after each file. I fixed it by taking the file index into account as well.
https://github.com/huggingface/datasets/pull/2557
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2557", "html_url": "https://github.com/huggingface/datasets/pull/2557", "diff_url": "https://github.com/huggingface/datasets/pull/2557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2557.patch", "merged_at": "2021-06-28T16:11...
2,557
true
Better DuplicateKeysError error to help the user debug the issue
As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys. The current one is ```python datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys s...
https://github.com/huggingface/datasets/issues/2556
[ "excuse me, my `datasets` version is `2.2.2`, but I also just see the error info like \r\n```\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```", "Hi ! for which dataset do you have this error ?\r\n\r\nAlso note that this is...
null
2,556
false
Fix code_search_net keys
There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552 I fixed the keys (it was an addition of the file and row indices, which was causing collisions) Fix #2552.
https://github.com/huggingface/datasets/pull/2555
[ "Fix #2552." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2555", "html_url": "https://github.com/huggingface/datasets/pull/2555", "diff_url": "https://github.com/huggingface/datasets/pull/2555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2555.patch", "merged_at": "2021-06-28T14:10...
2,555
true
Multilabel metrics not supported
When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L...
https://github.com/huggingface/datasets/issues/2554
[ "Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.", "Looks nice, thank you very much! πŸš€ ", "Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enou...
null
2,554
false
load_dataset("web_nlg") NonMatchingChecksumError
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev") ``` Gives ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['h...
https://github.com/huggingface/datasets/issues/2553
[ "Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today.\r\nI just pushed a fix at #2558 - this shouldn't happen anymore in the future.", "This is fixed on `master` now :)\r\nWe'll do a new release soon !" ]
null
2,553
false
Keys should be unique error on code_search_net
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
https://github.com/huggingface/datasets/issues/2552
[ "Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in...
null
2,552
false
Fix FileSystems documentation
### What this fixes: This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)). ### What were the issues? When I originally tried implementing the code examples I faced several bugs attributed to: -...
https://github.com/huggingface/datasets/pull/2551
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2551", "html_url": "https://github.com/huggingface/datasets/pull/2551", "diff_url": "https://github.com/huggingface/datasets/pull/2551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2551.patch", "merged_at": "2021-06-28T13:09...
2,551
true
Allow for incremental cumulative metric updates in a distributed setup
Currently, using a metric allows for one of the following: - Per example/batch metrics - Cumulative metrics over the whole data What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation. ...
https://github.com/huggingface/datasets/issues/2550
[]
null
2,550
false
Handling unlabeled datasets
Hi! Is there a way for datasets to produce unlabeled instances (e.g., can the `ClassLabel` be nullable)? For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"...
https://github.com/huggingface/datasets/issues/2549
[ "Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi...
null
2,549
false
Field order issue in loading json
## Describe the bug The `load_dataset` function expects columns in alphabetical order when loading json files. Similar bug was previously reported for csv in #623 and fixed in #684. ## Steps to reproduce the bug For a json file `j.json`, ``` {"c":321, "a": 1, "b": 2} ``` Running the following, ``` f= data...
https://github.com/huggingface/datasets/issues/2548
[ "Hi @luyug, thanks for reporting.\r\n\r\nThe good news is that we fixed this issue only 9 days ago: #2507.\r\n\r\nThe patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.\r\n\r\nFeel free to reopen the issue if the problem persists." ]
null
2,548
false
Dataset load_from_disk is too slow
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk when there are no preprocessing steps; it's only loading with `load_from_disk`. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t...
https://github.com/huggingface/datasets/issues/2547
[ "Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues wi...
null
2,547
false
Add license to the Cambridge English Write & Improve + LOCNESS dataset card
As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset. I added it and I also filled a few other empty sections.
https://github.com/huggingface/datasets/pull/2546
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2546", "html_url": "https://github.com/huggingface/datasets/pull/2546", "diff_url": "https://github.com/huggingface/datasets/pull/2546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2546.patch", "merged_at": "2021-06-24T10:52...
2,546
true
Fix DuplicatedKeysError in drop dataset
Close #2542. cc: @VictorSanh.
https://github.com/huggingface/datasets/pull/2545
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2545", "html_url": "https://github.com/huggingface/datasets/pull/2545", "diff_url": "https://github.com/huggingface/datasets/pull/2545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2545.patch", "merged_at": "2021-06-24T14:57...
2,545
true
Fix logging levels
Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info. Close #2543. cc: @stas00
https://github.com/huggingface/datasets/pull/2544
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2544", "html_url": "https://github.com/huggingface/datasets/pull/2544", "diff_url": "https://github.com/huggingface/datasets/pull/2544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2544.patch", "merged_at": "2021-06-25T13:40...
2,544
true
switching some low-level log.info's to log.debug?
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting applies consistent logging across all involved components. The trouble is that we now get a ton of these: ``` 06/23/2021 12:15:31 - INFO - da...
https://github.com/huggingface/datasets/issues/2543
[ "Hi @stas00, thanks for pointing out this issue with logging.\r\n\r\nI agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code." ]
null
2,543
false
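For users hitting the verbosity described in #2543 before a level change lands, the log level can already be adjusted explicitly; a minimal sketch:

```python
import datasets

# Hide the per-file INFO lines; warnings and errors still show.
datasets.logging.set_verbosity_warning()

# Or mirror whatever level the rest of the application (e.g. transformers)
# was configured with:
datasets.logging.set_verbosity(datasets.logging.WARNING)
```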
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
## Describe the bug Failure to generate the datasets (`drop` and the `adversarialQA` subset of `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results Th...
https://github.com/huggingface/datasets/issues/2542
[ "very much related: https://github.com/huggingface/datasets/pull/2333", "Hi @VictorSanh, thank you for reporting this issue with duplicated keys.\r\n\r\n- The issue with \"adversarial_qa\" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.\r\n- I am investigating the issue wit...
null
2,542
false
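The usual fix for a `DuplicatedKeysError`, sketched here for a generic JSON-lines script (the `query_id` field is illustrative): yield a running counter as the key instead of a dataset field that may repeat.

```python
import json

def _generate_examples(filepath):
    """Sketch of a builder's _generate_examples with guaranteed-unique keys."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            example = json.loads(line)
            # `idx` can never repeat, unlike e.g. example["query_id"],
            # so GeneratorBasedBuilder's key uniqueness check always passes.
            yield idx, example
```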
update discofuse link cc @ekQ
Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee
https://github.com/huggingface/datasets/pull/2541
[ "The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:\r\n\r\nhttps://github.com/huggingface/datasets/blob/19408f9fab85c79b966085574cd2da3b90959179/datasets/discofus...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2541", "html_url": "https://github.com/huggingface/datasets/pull/2541", "diff_url": "https://github.com/huggingface/datasets/pull/2541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2541.patch", "merged_at": "2021-06-28T14:34...
2,541
true
Remove task templates if required features are removed during `Dataset.map`
This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`: ```python from datasets import load_dataset # `yelp_polarity` comes with a `Tex...
https://github.com/huggingface/datasets/pull/2540
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2540", "html_url": "https://github.com/huggingface/datasets/pull/2540", "diff_url": "https://github.com/huggingface/datasets/pull/2540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2540.patch", "merged_at": "2021-06-24T13:34...
2,540
true
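Continuing the snippet from the #2540 description, a hedged sketch of the post-fix behavior (column names follow `yelp_polarity`; the expected output is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("yelp_polarity", split="test")
# Dropping `text`, which the TextClassification template requires, now
# removes the template instead of raising a KeyError:
ds = ds.map(lambda x: {"text_length": len(x["text"])}, remove_columns=["text"])
print(ds.info.task_templates)  # expected: no TextClassification template left
```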
remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license information is not only missing, but that the license itself also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it; unfortunately we can't, and the author kindly asked to take this dataset down.
https://github.com/huggingface/datasets/pull/2539
[ "Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?", "I think tha...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2539", "html_url": "https://github.com/huggingface/datasets/pull/2539", "diff_url": "https://github.com/huggingface/datasets/pull/2539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2539.patch", "merged_at": null }
2,539
true
Loading partial dataset when debugging
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues. Is there a wa...
https://github.com/huggingface/datasets/issues/2538
[ "Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[...
null
2,538
false
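The sliced-split pattern from the reply to #2538, spelled out; after the first (full) download, both calls read only the requested rows from the cache:

```python
from datasets import load_dataset

tiny_train = load_dataset("imdb", split="train[:100]")  # first 100 rows
tiny_test = load_dataset("imdb", split="test[:5%]")     # first 5% of the test split
```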
Add Parquet loader + from_parquet and to_parquet
Continuation of #2247. I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`. As usual, the data are converted to Arrow in a batched way to avoid loading everything into memory.
https://github.com/huggingface/datasets/pull/2537
[ "`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.\r\n\r\nAlso I still need to add dummy data to test the parquet builder.", "I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.\r\n\r\nEverything is ready for review now :)\r\nI reused pretty...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2537", "html_url": "https://github.com/huggingface/datasets/pull/2537", "diff_url": "https://github.com/huggingface/datasets/pull/2537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2537.patch", "merged_at": "2021-06-30T16:30...
2,537
true
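A quick round-trip with the methods described in #2537 (file name illustrative):

```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"id": [1, 2, 3], "text": ["a", "b", "c"]})
ds.to_parquet("data.parquet")               # written in batches, not all in memory
ds2 = Dataset.from_parquet("data.parquet")  # read it back

# The packaged "parquet" builder also works through load_dataset:
dsd = load_dataset("parquet", data_files={"train": "data.parquet"})
```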
Use `Audio` features for `AutomaticSpeechRecognition` task template
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis, this is brittle, as it doesn't port easily across different OSes. The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in ...
https://github.com/huggingface/datasets/issues/2536
[ "I'm just retaking and working on #2324. πŸ˜‰ ", "Resolved via https://github.com/huggingface/datasets/pull/4006." ]
null
2,536
false
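With the `Audio` feature as it eventually landed (see #4006 linked above), the cast looks roughly like this; the column name is an assumption, and a recent `datasets` version is required:

```python
from datasets import load_dataset, Audio

ds = load_dataset("timit_asr", split="train")
# Decoding happens lazily at access time, uniformly across OSes, instead
# of handing models a raw file-path string:
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["sampling_rate"])  # 16000
```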
Improve Features docs
- Fix rendering and cross-references in Features docs - Add docstrings to Features methods
https://github.com/huggingface/datasets/pull/2535
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2535", "html_url": "https://github.com/huggingface/datasets/pull/2535", "diff_url": "https://github.com/huggingface/datasets/pull/2535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2535.patch", "merged_at": "2021-06-23T13:40...
2,535
true
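For context on #2535, the kind of `Features` declaration those docs cover, as a small self-contained example:

```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["neg", "pos"]),
        "token_ids": Sequence(Value("int32")),
    }
)
print(features["label"].int2str(1))  # "pos"
```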
Sync with transformers disabling NOTSET
Close #2528.
https://github.com/huggingface/datasets/pull/2534
[ "Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?", "Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2534", "html_url": "https://github.com/huggingface/datasets/pull/2534", "diff_url": "https://github.com/huggingface/datasets/pull/2534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2534.patch", "merged_at": "2021-06-24T14:42...
2,534
true
Add task template for automatic speech recognition
This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription. Usage: ```python from datasets import load_dataset from datasets.tasks import AutomaticSpeechRecognition ds = load_dataset("timit_asr", split=...
https://github.com/huggingface/datasets/pull/2533
[ "@SBrandeis @lhoestq i've integrated your suggestions, so this is ready for another review :)", "Merging if it's good for you @lewtun :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2533", "html_url": "https://github.com/huggingface/datasets/pull/2533", "diff_url": "https://github.com/huggingface/datasets/pull/2533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2533.patch", "merged_at": "2021-06-23T15:56...
2,533
true
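A hedged sketch of how the #2533 template is meant to be consumed, via `prepare_for_task`; the resulting column names follow the template defaults described in the PR and should be treated as assumptions:

```python
from datasets import load_dataset

ds = load_dataset("timit_asr", split="train")
# Casts/renames columns to the template's schema so any ASR dataset
# exposes the same (audio_file_path, transcription) interface:
ds = ds.prepare_for_task("automatic-speech-recognition")
print(ds.column_names)
```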