Columns:
- title: string (lengths 1-290)
- body: string (lengths 0-228k, nullable ⌀)
- html_url: string (lengths 46-51)
- comments: list
- pull_request: dict
- number: int64 (1-5.59k)
- is_pull_request: bool (2 classes)
Tokenizer's normalization preprocessor causes misalignment in return_offsets_mapping for token classification task
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instance i...
https://github.com/huggingface/datasets/issues/2532
[ "Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?", "> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface/transformers" ]
null
2,532
false
Fix dev version
The dev version that ends in `.dev0` should be greater than the current version. However it happens that `1.8.0 > 1.8.0.dev0` for example. Therefore we need to use `1.8.1.dev0` for example in this case. I updated the dev version to use `1.8.1.dev0`, and I also added a comment in the setup.py in the release steps a...
https://github.com/huggingface/datasets/pull/2531
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2531", "html_url": "https://github.com/huggingface/datasets/pull/2531", "diff_url": "https://github.com/huggingface/datasets/pull/2531.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2531.patch", "merged_at": "2021-06-22T09:47...
2,531
true
Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
https://github.com/huggingface/datasets/pull/2530
[ "@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?", "Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2530", "html_url": "https://github.com/huggingface/datasets/pull/2530", "diff_url": "https://github.com/huggingface/datasets/pull/2530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2530.patch", "merged_at": "2021-06-22T12:52...
2,530
true
Add summarization template
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template. Usage: ```python from datasets import load_dataset from datasets.tasks import Summarization ds = load_dataset(...
https://github.com/huggingface/datasets/pull/2529
[ "> Nice thanks !\r\n> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.\r\n\r\nsure, on it! thanks for the explanations about the `self._to` method :)", "@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collecte...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2529", "html_url": "https://github.com/huggingface/datasets/pull/2529", "diff_url": "https://github.com/huggingface/datasets/pull/2529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2529.patch", "merged_at": "2021-06-23T13:30...
2,529
true
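PR #2529 above adds a summarization task template that covers both extractive and abstractive datasets. A minimal, hedged sketch of what such a template might look like; the class and field names here are illustrative assumptions, not the library's actual `Summarization` implementation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SummarizationSketch:
    """Illustrative shape of a task template (field names are assumptions,
    not the library's actual class)."""
    text_column: str = "text"
    summary_column: str = "summary"
    task: str = "summarization"

    @property
    def column_mapping(self):
        # Maps a dataset's own column names onto the canonical schema the
        # task expects, so any summarization dataset can be normalized.
        return {self.text_column: "text", self.summary_column: "summary"}
```

For a dataset whose columns are `article`/`highlights`, `SummarizationSketch(text_column="article", summary_column="highlights").column_mapping` yields the rename map `{"article": "text", "highlights": "summary"}`.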
Logging cannot be set to NOTSET similar to transformers
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b5...
https://github.com/huggingface/datasets/issues/2528
[ "Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`." ]
null
2,528
false
Replace bad `n>1M` size tag
Some datasets were still using the old `n>1M` tag, which has been replaced with tags `1M<n<10M`, etc. This led to unexpected results when searching for datasets bigger than 1M on the hub, since only the ones with the tag `n>1M` were shown.
https://github.com/huggingface/datasets/pull/2527
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2527", "html_url": "https://github.com/huggingface/datasets/pull/2527", "diff_url": "https://github.com/huggingface/datasets/pull/2527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2527.patch", "merged_at": "2021-06-21T15:06...
2,527
true
Add COCO datasets
## Adding a Dataset - **Name:** COCO - **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset. - **Paper + website:** https://cocodataset.org/#home - **Data:** https://cocodataset.org/#download - **Motivation:** It would be great to have COCO available in HuggingFace datasets...
https://github.com/huggingface/datasets/issues/2526
[ "I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco", "I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon....
null
2,526
false
Use scikit-learn package rather than sklearn in setup.py
The `sklearn` package is a historical artifact and should probably not be used by anyone; see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats. Note: this affects only TESTS_REQUIRE, so I guess it only impacts developers, not end users.
https://github.com/huggingface/datasets/pull/2525
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2525", "html_url": "https://github.com/huggingface/datasets/pull/2525", "diff_url": "https://github.com/huggingface/datasets/pull/2525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2525.patch", "merged_at": "2021-06-21T08:57...
2,525
true
Raise FileNotFoundError in WindowsFileLock
Closes #2443
https://github.com/huggingface/datasets/pull/2524
[ "Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?", "This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2524", "html_url": "https://github.com/huggingface/datasets/pull/2524", "diff_url": "https://github.com/huggingface/datasets/pull/2524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2524.patch", "merged_at": "2021-06-28T08:47...
2,524
true
Fr
__Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__
https://github.com/huggingface/datasets/issues/2523
[]
null
2,523
false
Documentation Mistakes in Dataset: emotion
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight b...
https://github.com/huggingface/datasets/issues/2522
[ "Hi,\r\n\r\nthis issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side.", "The documentation has another bug in the dataset card [here](https://huggingface.co/datasets/emotion). \r\n\r\nIn the dataset summary **six** emotions are ...
null
2,522
false
Insert text classification template for Emotion dataset
This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.
https://github.com/huggingface/datasets/pull/2521
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2521", "html_url": "https://github.com/huggingface/datasets/pull/2521", "diff_url": "https://github.com/huggingface/datasets/pull/2521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2521.patch", "merged_at": "2021-06-21T09:22...
2,521
true
Datasets with tricky task templates
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for. ## Text classification * [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` ...
https://github.com/huggingface/datasets/issues/2520
[]
null
2,520
false
Improve performance of pandas arrow extractor
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
https://github.com/huggingface/datasets/pull/2519
[ "Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https://issues.apache.org/jira/browse/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2519", "html_url": "https://github.com/huggingface/datasets/pull/2519", "diff_url": "https://github.com/huggingface/datasets/pull/2519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2519.patch", "merged_at": "2021-06-21T09:06...
2,519
true
Add task templates for tydiqa and xquad
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes: * I could not test the tydiqa implementation since I don't have enough disk space 😢 . But I am confident the template works :) * there exist other datasets like `fquad` and `mlqa` which are candida...
https://github.com/huggingface/datasets/pull/2518
[ "Just tested TydiQA and it works fine :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2518", "html_url": "https://github.com/huggingface/datasets/pull/2518", "diff_url": "https://github.com/huggingface/datasets/pull/2518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2518.patch", "merged_at": "2021-06-18T14:50...
2,518
true
Fix typo in MatthewsCorrelation class name
Close #2513.
https://github.com/huggingface/datasets/pull/2517
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2517", "html_url": "https://github.com/huggingface/datasets/pull/2517", "diff_url": "https://github.com/huggingface/datasets/pull/2517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2517.patch", "merged_at": "2021-06-18T08:43...
2,517
true
datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__getstate__` / `__setstate__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
https://github.com/huggingface/datasets/issues/2516
[ "Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?", "@lhoestq because if I ...
null
2,516
false
CRD3 dataset card
This PR adds additional information to the CRD3 dataset card.
https://github.com/huggingface/datasets/pull/2515
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2515", "html_url": "https://github.com/huggingface/datasets/pull/2515", "diff_url": "https://github.com/huggingface/datasets/pull/2515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2515.patch", "merged_at": "2021-06-21T10:18...
2,515
true
Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. **Describe the solution you'd like*...
https://github.com/huggingface/datasets/issues/2514
[ "Hi ! For now this is probably the best option.\r\nWe might add a feature like this in the feature as well.\r\n\r\nDo you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\nOtherwise we can have do the deduplication in memory like pandas but I feel like this is going to b...
null
2,514
false
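Issue #2514 above asks for built-in row deduplication without the pandas round-trip. A minimal, hedged sketch of the hash-based approach the comments allude to, storing only a digest per unique row to bound RAM; the `deduplicate` function and its parameters are hypothetical, not part of the `datasets` API:

```python
import hashlib
import json


def deduplicate(rows, key_columns=None):
    """Yield rows whose (selected) content has not been seen before.

    Keeps only a 16-byte digest per unique row, so memory grows with the
    number of unique rows rather than with their size.
    """
    seen = set()
    for row in rows:
        subset = {k: row[k] for k in key_columns} if key_columns else row
        digest = hashlib.md5(
            json.dumps(subset, sort_keys=True).encode("utf-8")
        ).digest()
        if digest not in seen:
            seen.add(digest)
            yield row


rows = [{"text": "a"}, {"text": "b"}, {"text": "a"}]
unique = list(deduplicate(rows))  # keeps the first "a", drops the duplicate
```

For truly arbitrary-sized datasets one would spill the digest set to disk (or accept false positives with a Bloom filter), which is exactly the open question in the thread.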
Corelation should be Correlation
https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66
https://github.com/huggingface/datasets/issues/2513
[ "Hi @colbym-MM, thanks for reporting. We are fixing it." ]
null
2,513
false
seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict'
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric seqeval = load_metric("seqeval") seqeval.compute(predictions=[['A']], references=[['A']]) ``` ## Expected results The function computes a dict with ...
https://github.com/huggingface/datasets/issues/2512
[ "Sorry, I was using an old version of sequeval" ]
null
2,512
false
Add C4
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [h...
https://github.com/huggingface/datasets/issues/2511
[ "Update on this: I'm computing the checksums of the data files. It will be available soon", "Added in #2575 :)" ]
null
2,511
false
Add align_labels_with_mapping to DatasetDict
https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method. In this PR I also added `DatasetDict.align_labels_with_mapping`
https://github.com/huggingface/datasets/pull/2510
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2510", "html_url": "https://github.com/huggingface/datasets/pull/2510", "diff_url": "https://github.com/huggingface/datasets/pull/2510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2510.patch", "merged_at": "2021-06-17T10:45...
2,510
true
Fix fingerprint when moving cache dir
The fingerprint of a dataset changes if the cache directory is moved. I fixed that by setting the fingerprint to be the hash of: - the relative cache dir (dataset_name/version/config_id) - the requested split Close #2496 I had to fix an issue with the filelock filename that was too long (>255). It prevented t...
https://github.com/huggingface/datasets/pull/2509
[ "Windows, why are you doing this to me ?", "Thanks @lhoestq, I'm starting reviewing this PR.", "Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths", "Took your suggestions into account @albertvillanova :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2509", "html_url": "https://github.com/huggingface/datasets/pull/2509", "diff_url": "https://github.com/huggingface/datasets/pull/2509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2509.patch", "merged_at": "2021-06-21T15:05...
2,509
true
Load Image Classification Dataset from Local
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load th...
https://github.com/huggingface/datasets/issues/2508
[ "Hi ! Is this folder structure a standard, a bit like imagenet ?\r\nIn this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmy_custom_cifar = load_dataset(\"cifar_like\", data_dir=\"path...
null
2,508
false
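Issue #2508 above requests loading an image-classification dataset from a class-per-folder layout. A hedged sketch of the folder scan such a loader would need; `scan_image_folder` is a hypothetical helper, not a `datasets` API:

```python
from pathlib import Path


def scan_image_folder(root):
    """Hypothetical helper: map a root/<class_name>/<image> layout to
    integer-labeled examples.

    Paths are kept as strings so images can be opened lazily during
    training instead of being loaded eagerly here (as suggested in the
    issue comments).
    """
    root = Path(root)
    class_names = sorted(p.name for p in root.iterdir() if p.is_dir())
    label_of = {name: i for i, name in enumerate(class_names)}
    examples = [
        {"image_file": str(f), "label": label_of[d.name]}
        for d in sorted(root.iterdir())
        if d.is_dir()
        for f in sorted(d.iterdir())
        if f.is_file()
    ]
    return class_names, examples
```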
Rearrange JSON field names to match passed features schema field names
This PR depends on PR #2453 (which must be merged first). Close #2366.
https://github.com/huggingface/datasets/pull/2507
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2507", "html_url": "https://github.com/huggingface/datasets/pull/2507", "diff_url": "https://github.com/huggingface/datasets/pull/2507.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2507.patch", "merged_at": "2021-06-16T10:47...
2,507
true
Add course banner
This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.
https://github.com/huggingface/datasets/pull/2506
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2506", "html_url": "https://github.com/huggingface/datasets/pull/2506", "diff_url": "https://github.com/huggingface/datasets/pull/2506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2506.patch", "merged_at": "2021-06-15T16:25...
2,506
true
Make numpy arrow extractor faster
I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498 This could make the numpy/torch/tf/jax formatting faster
https://github.com/huggingface/datasets/pull/2505
[ "Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec", "Can we convert this draft to PR @lhoestq ?", "Ready for review ! cc @vblagoje", "@lhoestq I tried the branch a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2505", "html_url": "https://github.com/huggingface/datasets/pull/2505", "diff_url": "https://github.com/huggingface/datasets/pull/2505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2505.patch", "merged_at": "2021-06-28T09:53...
2,505
true
SubjQA wrong boolean values in entries
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are...
https://github.com/huggingface/datasets/issues/2503
[ "Hi @arnaudstiegler, thanks for reporting. I'm investigating it.", "@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.", "I have:\r\n- opened an issue in th...
null
2,503
false
JAX integration
Hi ! I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow). It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects. ```python from datasets import Dataset d = Dataset.from_dict({"foo": [[0., 1., 2.]]})...
https://github.com/huggingface/datasets/pull/2502
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2502", "html_url": "https://github.com/huggingface/datasets/pull/2502", "diff_url": "https://github.com/huggingface/datasets/pull/2502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2502.patch", "merged_at": "2021-06-21T16:15...
2,502
true
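PR #2502 above adds a "jax" formatter alongside the existing pytorch/tensorflow/numpy ones. A toy sketch of the formatter-registry dispatch pattern this implies; the registry names and the stub conversion are assumptions for illustration, not the library's actual classes:

```python
# Each output format ("python", "jax", ...) registers a function that
# converts a plain Python column into that framework's array type.
_FORMATTERS = {}


def register_formatter(name):
    def wrap(fn):
        _FORMATTERS[name] = fn
        return fn
    return wrap


@register_formatter("python")
def python_formatter(column):
    return column  # identity: leave plain lists untouched


@register_formatter("jax")
def jax_formatter(column):
    # Real code would return jax.numpy.asarray(column); stubbed here so
    # the sketch runs without JAX installed.
    return ("jax.numpy.ndarray", column)


def format_column(column, fmt):
    return _FORMATTERS[fmt](column)
```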
Add Zenodo metadata file with license
This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`. Close #2472.
https://github.com/huggingface/datasets/pull/2501
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2501", "html_url": "https://github.com/huggingface/datasets/pull/2501", "diff_url": "https://github.com/huggingface/datasets/pull/2501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2501.patch", "merged_at": "2021-06-14T16:49...
2,501
true
Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
https://github.com/huggingface/datasets/pull/2500
[ "Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors โค๏ธ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action t...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2500", "html_url": "https://github.com/huggingface/datasets/pull/2500", "diff_url": "https://github.com/huggingface/datasets/pull/2500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2500.patch", "merged_at": "2021-07-05T10:45...
2,500
true
Python Programming Puzzles
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scro...
https://github.com/huggingface/datasets/issues/2499
[ "๐Ÿ‘€ @TalSchuster", "Thanks @VictorSanh!\r\nThere's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles" ]
null
2,499
false
Improve torch formatting performance
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an...
https://github.com/huggingface/datasets/issues/2498
[ "Thatโ€™s interesting thanks, letโ€™s see what we can do. Can you detail your last sentence? Iโ€™m not sure I understand it well.", "Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:\r\n\r\n```python\r\nimport pyarrow as pa # I used pyarrow 3.0.0\r\nimport numpy as np\r\n\r\nn, max_le...
null
2,498
false
Use default cast for sliced list arrays if pyarrow >= 4
From pyarrow version 4, it is supported to cast sliced lists. This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4. In relation with PR #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
https://github.com/huggingface/datasets/pull/2497
[ "I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?", "@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 ๐Ÿ˜‰ " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2497", "html_url": "https://github.com/huggingface/datasets/pull/2497", "diff_url": "https://github.com/huggingface/datasets/pull/2497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2497.patch", "merged_at": "2021-06-14T14:24...
2,497
true
Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification times of the arrow file...
https://github.com/huggingface/datasets/issues/2496
[]
null
2,496
false
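Issue #2496 above explains the bug: the fingerprint was derived from the absolute dataset path plus arrow-file modification times, so moving the cache changed it. A hedged sketch of the fix described in PR #2509, hashing only cache-relative components plus the split; the function name and exact fields are illustrative, not the library's code:

```python
import hashlib


def relative_fingerprint(dataset_name, version, config_id, split):
    """Hash only path components that are relative to the cache root,
    plus the requested split, so moving the cache directory leaves the
    fingerprint unchanged."""
    relative_cache_dir = f"{dataset_name}/{version}/{config_id}"
    payload = f"{relative_cache_dir}::{split}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]
```

Two machines (or two locations on one machine) holding the same dataset version and config now compute the same fingerprint, so `map` results cached before a move are found after it.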
JAX formatting
We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well
https://github.com/huggingface/datasets/issues/2495
[]
null
2,495
false
Improve docs on Enhancing performance
In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases: - How to make datasets the fastest - How to make datasets take the less RAM - How to make datasets take the less hard drive mem cc: @thomwolf
https://github.com/huggingface/datasets/issues/2494
[]
null
2,494
false
add tensorflow-macos support
ref - https://github.com/huggingface/datasets/issues/2068
https://github.com/huggingface/datasets/pull/2493
[ "@albertvillanova done!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2493", "html_url": "https://github.com/huggingface/datasets/pull/2493", "diff_url": "https://github.com/huggingface/datasets/pull/2493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2493.patch", "merged_at": "2021-06-15T08:53...
2,493
true
Eduge
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
https://github.com/huggingface/datasets/pull/2492
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2492", "html_url": "https://github.com/huggingface/datasets/pull/2492", "diff_url": "https://github.com/huggingface/datasets/pull/2492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2492.patch", "merged_at": "2021-06-16T10:41...
2,492
true
add eduge classification dataset
https://github.com/huggingface/datasets/pull/2491
[ "Closing this PR as I'll submit a new one - bug free" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2491", "html_url": "https://github.com/huggingface/datasets/pull/2491", "diff_url": "https://github.com/huggingface/datasets/pull/2491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2491.patch", "merged_at": null }
2,491
true
Allow latest pyarrow version
Allow latest pyarrow version, once that version 4.0.1 fixes the segfault bug introduced in version 4.0.0. Close #2489.
https://github.com/huggingface/datasets/pull/2490
[ "i need some help with this" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2490", "html_url": "https://github.com/huggingface/datasets/pull/2490", "diff_url": "https://github.com/huggingface/datasets/pull/2490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2490.patch", "merged_at": "2021-06-14T07:53...
2,490
true
Allow latest pyarrow version once segfault bug is fixed
As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568): - it was fixed on 3 May 2021 - version 4.0.1 was released on 19 May 2021 with the bug fix
https://github.com/huggingface/datasets/issues/2489
[]
null
2,489
false
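Issues #2489/#2490 above boil down to excluding a single broken release: the segfault appeared in pyarrow 4.0.0 and was fixed in 4.0.1. A hedged sketch of the version logic; the lower bound and names are illustrative, not the repository's actual `setup.py` entry:

```python
# Requirement-specifier sketch: exclude only the segfaulting 4.0.0 release
# while allowing 4.0.1 and later.
PYARROW_REQUIREMENT = "pyarrow>=1.0.0,!=4.0.0"


def version_allowed(version):
    """Mirror the intent of the specifier above for plain x.y.z strings."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= (1, 0, 0) and parts != (4, 0, 0)
```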
Set configurable downloaded datasets path
Part of #2480.
https://github.com/huggingface/datasets/pull/2488
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2488", "html_url": "https://github.com/huggingface/datasets/pull/2488", "diff_url": "https://github.com/huggingface/datasets/pull/2488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2488.patch", "merged_at": "2021-06-14T08:29...
2,488
true
Set configurable extracted datasets path
Part of #2480.
https://github.com/huggingface/datasets/pull/2487
[ "Let me push a small fix... ๐Ÿ˜‰ ", "Thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2487", "html_url": "https://github.com/huggingface/datasets/pull/2487", "diff_url": "https://github.com/huggingface/datasets/pull/2487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2487.patch", "merged_at": "2021-06-14T09:02...
2,487
true
Add Rico Dataset
Hi there! I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib. 1) There ...
https://github.com/huggingface/datasets/pull/2486
[ "Hi ! Thanks for adding this dataset :)\r\n\r\nRegarding your questions:\r\n1. We can have them as different configuration of the `rico` dataset\r\n2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at at time during training if they want to for exam...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2486", "html_url": "https://github.com/huggingface/datasets/pull/2486", "diff_url": "https://github.com/huggingface/datasets/pull/2486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2486.patch", "merged_at": null }
2,486
true
Implement layered building
As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190): > My suggestion for this would be to have this enabled by default. > > Plus I don't know if there should be a dedicated issue for that, as it is another functionality. But I propose layered b...
https://github.com/huggingface/datasets/issues/2485
[]
null
2,485
false
Implement loading a dataset builder
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
https://github.com/huggingface/datasets/issues/2484
[ "#self-assign" ]
null
2,484
false
Use gc.collect only when needed to avoid slow downs
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482) However calling gc.collect too often causes significant slow downs (the CI run time doubled). So I just m...
https://github.com/huggingface/datasets/pull/2483
[ "I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ", "FR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2483", "html_url": "https://github.com/huggingface/datasets/pull/2483", "diff_url": "https://github.com/huggingface/datasets/pull/2483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2483.patch", "merged_at": "2021-06-11T15:31...
2,483
true
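PR #2483 above keeps the `gc.collect` workaround but restricts it to the cases that need it, since calling it on every operation doubled CI run time. A hedged sketch of that compromise; the helper name and the platform check are illustrative, not the PR's exact code:

```python
import gc
import sys


def maybe_collect():
    """Run the (expensive) garbage-collection pass only where lingering
    pyarrow file handles cause permission errors, i.e. on Windows;
    elsewhere skip it to avoid the slowdown."""
    if sys.platform == "win32":
        gc.collect()
```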
Allow to use tqdm>=4.50.0
We used to have permission errors on windows whith the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)) They were due to open arrow files not properly closed by pyarrow. Since https://github.com/huggin...
https://github.com/huggingface/datasets/pull/2482
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2482", "html_url": "https://github.com/huggingface/datasets/pull/2482", "diff_url": "https://github.com/huggingface/datasets/pull/2482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2482.patch", "merged_at": "2021-06-11T15:11...
2,482
true
Delete extracted files to save disk space
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space for the typical user.
https://github.com/huggingface/datasets/issues/2481
[ "My suggestion for this would be to have this enabled by default.\r\n\r\nPlus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n\r\n1. uncompress a handful of files via a generator enough to generate one arrow file...
null
2,481
false
Set download/extracted paths configurable
As discussed with @stas00 and @lhoestq, making these paths configurable may help overcome disk space limitations by using different partitions/drives. TODO: - [x] Set configurable extracted datasets path: #2487 - [x] Set configurable downloaded datasets path: #2488 - [ ] Set configurable "incomplete" datasets path?
https://github.com/huggingface/datasets/issues/2480
[ "For example to be able to send uncompressed and temp build files to another volume/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. on JZ there...
null
2,480
false
โŒ load_datasets โŒ
https://github.com/huggingface/datasets/pull/2479
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2479", "html_url": "https://github.com/huggingface/datasets/pull/2479", "diff_url": "https://github.com/huggingface/datasets/pull/2479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2479.patch", "merged_at": "2021-06-11T14:46...
2,479
true
Create release script
Create a script so that releases can be done automatically (as done in `transformers`).
https://github.com/huggingface/datasets/issues/2478
[]
null
2,478
false
Fix docs custom stable version
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
https://github.com/huggingface/datasets/pull/2477
[ "I see that @lhoestq overlooked this PR with his commit 07e2b05. ๐Ÿ˜ข \r\n\r\nI'm adding a script so that this issue does not happen again.\r\n", "For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release.", "I think we just ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2477", "html_url": "https://github.com/huggingface/datasets/pull/2477", "diff_url": "https://github.com/huggingface/datasets/pull/2477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2477.patch", "merged_at": "2021-06-14T08:20...
2,477
true
Add TimeDial
Dataset: https://github.com/google-research-datasets/TimeDial To-Do: Update README.md and add YAML tags
https://github.com/huggingface/datasets/pull/2476
[ "Hi @lhoestq,\r\nI've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!\r\n\r\n~PS. I don't know why it's not triggering the build~" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2476", "html_url": "https://github.com/huggingface/datasets/pull/2476", "diff_url": "https://github.com/huggingface/datasets/pull/2476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2476.patch", "merged_at": "2021-07-30T12:57...
2,476
true
Issue in timit_asr database
## Describe the bug I am trying to load the timit_asr dataset, however only the first record is shown (duplicated over all the rows). I am using the following line of code: dataset = load_dataset(“timit_asr”, split=“test”).shuffle().select(range(10)) The above code results in the same sentence duplicated ten times. It al...
https://github.com/huggingface/datasets/issues/2475
[ "This bug was fixed in #1995. Upgrading datasets to version 1.6 fixes the issue!", "Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks " ]
null
2,475
false
cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.** When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset by using the _load_from_disk_ function, the data gets cache...
https://github.com/huggingface/datasets/issues/2474
[ "Hi ! `load_from_disk` doesn't move the data. If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.\r\n\r\nHowever note than writing...
null
2,474
false
Add Disfl-QA
Dataset: https://github.com/google-research-datasets/disfl-qa To-Do: Update README.md and add YAML tags
https://github.com/huggingface/datasets/pull/2473
[ "Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.", "I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2473", "html_url": "https://github.com/huggingface/datasets/pull/2473", "diff_url": "https://github.com/huggingface/datasets/pull/2473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2473.patch", "merged_at": "2021-07-29T11:56...
2,473
true
Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right
https://github.com/huggingface/datasets/issues/2472
[ "I have received a reply from Zenodo support:\r\n> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you.", "Other repo maintainers had the same problem with Zenodo. \r\n\r\nThere is an open issue on their GitHub repo: zenodo/zenodo...
null
2,472
false
Fix PermissionError on Windows when using tqdm >=4.50.0
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```
https://github.com/huggingface/datasets/issues/2471
[]
null
2,471
false
Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`. I believe I've had cases where `num_proc` > 1 worked before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti...
https://github.com/huggingface/datasets/issues/2470
[ "Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?", "Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.", "Could you tryi...
null
2,470
false
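One plausible source of such crashes (a sketch only, not the actual `datasets` implementation, and `shard_ranges` is a hypothetical helper) is splitting a short dataset into more shards than it has rows, producing empty shards; clamping the worker count avoids that:

```python
# Illustrative sketch: when splitting a dataset of n_rows into num_proc shards,
# clamping num_proc to n_rows avoids empty shards, which can trigger edge cases
# in multiprocessing code paths.

def shard_ranges(n_rows, num_proc):
    """Split range(n_rows) into at most num_proc contiguous, non-empty shards."""
    num_proc = max(1, min(num_proc, n_rows))  # clamp to avoid empty shards
    base, extra = divmod(n_rows, num_proc)
    ranges, start = [], 0
    for i in range(num_proc):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

print(shard_ranges(3, 16))  # 3 rows with 16 workers -> only 3 shards are created
```

With 10 rows and 3 workers this yields shards of sizes 4, 3 and 3, so every worker receives at least one row.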
Bump tqdm version
https://github.com/huggingface/datasets/pull/2469
[ "i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows ๐Ÿ˜ž \r\n\r\nit's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets`", "Closing since this is now fixed in #2482 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2469", "html_url": "https://github.com/huggingface/datasets/pull/2469", "diff_url": "https://github.com/huggingface/datasets/pull/2469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2469.patch", "merged_at": null }
2,469
true
Implement ClassLabel encoding in JSON loader
Close #2365.
https://github.com/huggingface/datasets/pull/2468
[ "No, nevermind @lhoestq. Thanks to you for your reviews!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2468", "html_url": "https://github.com/huggingface/datasets/pull/2468", "diff_url": "https://github.com/huggingface/datasets/pull/2468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2468.patch", "merged_at": "2021-06-28T15:05...
2,468
true
change udpos features structure
The structure is change such that each example is a sentence The change is done for issues: #2061 #2444 Close #2061 , close #2444.
https://github.com/huggingface/datasets/pull/2466
[ "Let's add the tags in another PR. Thanks again !", "Close #2061 , close #2444." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2466", "html_url": "https://github.com/huggingface/datasets/pull/2466", "diff_url": "https://github.com/huggingface/datasets/pull/2466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2466.patch", "merged_at": "2021-06-16T10:41...
2,466
true
adding masahaner dataset
Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner @lhoestq , can you please review
https://github.com/huggingface/datasets/pull/2465
[ "Thank you for the review. ", "Thanks a lot for the corrections and comments. \r\n\r\nI have resolved point 2. The make style still throws some errors, please see below\r\n\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics\r\n/bin/sh: 1: black: not found\r\nMakefile:13...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2465", "html_url": "https://github.com/huggingface/datasets/pull/2465", "diff_url": "https://github.com/huggingface/datasets/pull/2465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2465.patch", "merged_at": "2021-06-14T14:59...
2,465
true
fix: adjusting indexing for the labels.
The label indices were mismatched with the actual ones used in the dataset: specifically, `0` is used for `SUPPORTS` and `1` is used for `REFUTES`. After this change, the `README.md` now reflects the content of `dataset_infos.json`. Signed-off-by: Matteo Manica <drugilsberg@gmail.com>
https://github.com/huggingface/datasets/pull/2464
[ "> Good catch ! Thanks for fixing it\r\n\r\nMy pleasure๐Ÿ™" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2464", "html_url": "https://github.com/huggingface/datasets/pull/2464", "diff_url": "https://github.com/huggingface/datasets/pull/2464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2464.patch", "merged_at": "2021-06-09T09:10...
2,464
true
Fix proto_qa download link
Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
https://github.com/huggingface/datasets/pull/2463
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2463", "html_url": "https://github.com/huggingface/datasets/pull/2463", "diff_url": "https://github.com/huggingface/datasets/pull/2463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2463.patch", "merged_at": "2021-06-10T08:31...
2,463
true
Merge DatasetDict and Dataset
As discussed in #2424 and #2437 (please see there for detailed conversation): - It would be desirable to improve UX with respect to the confusion between DatasetDict and Dataset. - The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. - A user expects...
https://github.com/huggingface/datasets/issues/2462
[]
null
2,462
false
Support sliced list arrays in cast
There is this issue in pyarrow: ```python import pyarrow as pa arr = pa.array([[i * 10] for i in range(4)]) arr.cast(pa.list_(pa.int32())) # works arr = arr.slice(1) arr.cast(pa.list_(pa.int32())) # fails # ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented") ``` Howev...
https://github.com/huggingface/datasets/pull/2461
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2461", "html_url": "https://github.com/huggingface/datasets/pull/2461", "diff_url": "https://github.com/huggingface/datasets/pull/2461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2461.patch", "merged_at": "2021-06-08T17:56...
2,461
true
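The underlying mechanics can be sketched in plain Python (this is a simplification, not pyarrow internals): a list array is a flat values buffer plus an offsets buffer, and slicing shifts the logical start without rewriting offsets, so code assuming offsets start at 0 breaks. Re-zeroing the offsets restores that invariant; `rezero` is a hypothetical helper for illustration:

```python
# Illustrative sketch of why casting a sliced list array is tricky: the slice
# keeps the original offsets, so they no longer start at 0. Rebuilding the
# offsets/values pair restores the zero-offset invariant a cast may rely on.

def rezero(offsets, values, start):
    """Rebuild (offsets, values) so the sliced list array starts at offset 0."""
    first = offsets[start]
    new_offsets = [o - first for o in offsets[start:]]
    new_values = values[first:offsets[-1]]
    return new_offsets, new_values

# lists [[0], [10], [20], [30]] -> offsets [0, 1, 2, 3, 4], values [0, 10, 20, 30]
offsets, values = [0, 1, 2, 3, 4], [0, 10, 20, 30]
print(rezero(offsets, values, 1))  # drop the first list, offsets start at 0 again
```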
Revert default in-memory for small datasets
Close #2458
https://github.com/huggingface/datasets/pull/2460
[ "Thank you for this welcome change guys!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2460", "html_url": "https://github.com/huggingface/datasets/pull/2460", "diff_url": "https://github.com/huggingface/datasets/pull/2460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2460.patch", "merged_at": "2021-06-08T17:55...
2,460
true
`Proto_qa` hosting seems to be broken
## Describe the bug The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. @zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py` ## Steps to reproduce the bug ```python from datasets impo...
https://github.com/huggingface/datasets/issues/2459
[ "@VictorSanh , I think @mariosasko is already working on it. " ]
null
2,459
false
Revert default in-memory for small datasets
Users are reporting issues and confusion about setting default in-memory to True for small datasets. We see 2 clear use cases of Datasets: - the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation) - some edge cases (speed benchmarks, inter...
https://github.com/huggingface/datasets/issues/2458
[ "cc: @krandiash (pinged in reverted PR)." ]
null
2,458
false
Add align_labels_with_mapping function
This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself. This will help us with the Hub evaluation, where we won't know in advance whether a model that...
https://github.com/huggingface/datasets/pull/2457
[ "@lhoestq i think this is ready for another review ๐Ÿ™‚ ", "@lhoestq thanks for the feedback - it's now integrated :) \r\n\r\ni also added a comment about sorting the input label IDs", "Created the PR here: https://github.com/huggingface/datasets/pull/2510", "> Thanks ! Looks all good now :)\r\n> \r\n> We will ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2457", "html_url": "https://github.com/huggingface/datasets/pull/2457", "diff_url": "https://github.com/huggingface/datasets/pull/2457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2457.patch", "merged_at": "2021-06-17T09:56...
2,457
true
Fix cross-reference typos in documentation
Fix some minor typos in docs that avoid the creation of cross-reference links.
https://github.com/huggingface/datasets/pull/2456
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2456", "html_url": "https://github.com/huggingface/datasets/pull/2456", "diff_url": "https://github.com/huggingface/datasets/pull/2456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2456.patch", "merged_at": "2021-06-08T17:41...
2,456
true
Update version in xor_tydi_qa.py
Fix #2449 @lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`?
https://github.com/huggingface/datasets/pull/2455
[ "Hi ! Thanks for updating the version\r\n\r\n> Should I revert to the old dummy/1.0.0 or delete it and keep only dummy/1.1.0?\r\n\r\nFeel free to delete the old dummy data files\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2455", "html_url": "https://github.com/huggingface/datasets/pull/2455", "diff_url": "https://github.com/huggingface/datasets/pull/2455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2455.patch", "merged_at": "2021-06-14T15:35...
2,455
true
Rename config and environment variable for in memory max size
As discussed in #2409, both the config option and the environment variable have been renamed. cc: @stas00, huggingface/transformers#12056
https://github.com/huggingface/datasets/pull/2454
[ "Thank you for the rename, @albertvillanova!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2454", "html_url": "https://github.com/huggingface/datasets/pull/2454", "diff_url": "https://github.com/huggingface/datasets/pull/2454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2454.patch", "merged_at": "2021-06-07T20:43...
2,454
true
Keep original features order
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not. I found this issue while working on #2366.
https://github.com/huggingface/datasets/pull/2453
[ "The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. It was failing for many datasets like squad", "and obviously it broke everything", "Feel free to revert my commit. I can investigate this in the coming days", ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2453", "html_url": "https://github.com/huggingface/datasets/pull/2453", "diff_url": "https://github.com/huggingface/datasets/pull/2453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2453.patch", "merged_at": "2021-06-15T15:43...
2,453
true
MRPC test set differences between torch and tensorflow datasets
## Describe the bug When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of import...
https://github.com/huggingface/datasets/issues/2452
[ "Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue there." ]
null
2,452
false
Mention that there are no answers in adversarial_qa test set
As mention in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set
https://github.com/huggingface/datasets/pull/2451
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2451", "html_url": "https://github.com/huggingface/datasets/pull/2451", "diff_url": "https://github.com/huggingface/datasets/pull/2451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2451.patch", "merged_at": "2021-06-07T08:34...
2,451
true
BLUE file not found
Hi, I'm having the following issue when I try to load the `blue` metric. ```shell import datasets metric = datasets.load_metric('blue') Traceback (most recent call last): File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module local...
https://github.com/huggingface/datasets/issues/2450
[ "Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```", "Ah, my mistake. Thanks for correctin...
null
2,450
false
Update `xor_tydi_qa` url to v1.1
The dataset has been updated and the old URL no longer works, so I updated it. I faced a bug while fixing this and am documenting the solution here. Maybe we can add it to the docs (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`). > And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to ...
https://github.com/huggingface/datasets/pull/2449
[ "Just noticed while \r\n```load_dataset('local_path/datastes/xor_tydi_qa')``` works,\r\n```load_dataset('xor_tydi_qa')``` \r\noutputs an error: \r\n`\r\nFileNotFoundError: Couldn't find file at https://nlp.cs.washington.edu/xorqa/XORQA_site/data/xor_dev_retrieve_eng_span.jsonl\r\n`\r\n(the old url)\r\n\r\nI tired c...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2449", "html_url": "https://github.com/huggingface/datasets/pull/2449", "diff_url": "https://github.com/huggingface/datasets/pull/2449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2449.patch", "merged_at": "2021-06-07T08:31...
2,449
true
Fix flores download link
https://github.com/huggingface/datasets/pull/2448
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2448", "html_url": "https://github.com/huggingface/datasets/pull/2448", "diff_url": "https://github.com/huggingface/datasets/pull/2448.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2448.patch", "merged_at": "2021-06-07T08:18...
2,448
true
dataset adversarial_qa has no answers in the "test" set
## Describe the bug When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta') ## Steps to reproduce the bug ``` from datasets import load_dataset examples ...
https://github.com/huggingface/datasets/issues/2447
[ "Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.\r\nIn any case we should mention this in the dataset card of this dataset.", "Makes sense, but not intuitive for someon...
null
2,447
false
`yelp_polarity` is broken
![image](https://user-images.githubusercontent.com/22514219/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)
https://github.com/huggingface/datasets/issues/2446
[ "```\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 233, in <module>\r\n configs = get_confs(option)\r\nFile \"/home/sasha/.local/shar...
null
2,446
false
Fix broken URLs for bn_hate_speech and covid_tweets_japanese
Closes #2388
https://github.com/huggingface/datasets/pull/2445
[ "Thanks ! To fix the CI you just have to rename the dummy data file in the dummy_data.zip files", "thanks for the tip with the dummy data - all fixed now!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2445", "html_url": "https://github.com/huggingface/datasets/pull/2445", "diff_url": "https://github.com/huggingface/datasets/pull/2445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2445.patch", "merged_at": "2021-06-04T17:39...
2,445
true
Sentence Boundaries missing in Dataset: xtreme / udpos
I was browsing through annotation guidelines, as suggested by the datasets introduction. The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldepend...
https://github.com/huggingface/datasets/issues/2444
[ "Hi,\r\n\r\nThis is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it.", "Closed by #2466." ]
null
2,444
false
Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr...
https://github.com/huggingface/datasets/issues/2443
[ "Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.\r\nAlso if we could have an error instead of an infinite loop I'm sure windows users will appreciate that", "Unfortunately, I know this problem very well... ๐Ÿ˜… \r\n\r\nI remember having proposed to throw a...
null
2,443
false
add english language tags for ~100 datasets
As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing, so I am adding it to the READMEs. Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English...
https://github.com/huggingface/datasets/pull/2442
[ "Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2442", "html_url": "https://github.com/huggingface/datasets/pull/2442", "diff_url": "https://github.com/huggingface/datasets/pull/2442.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2442.patch", "merged_at": "2021-06-04T09:51...
2,442
true
DuplicatedKeysError on personal dataset
## Describe the bug Since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script. Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')` Note ...
https://github.com/huggingface/datasets/issues/2441
[ "Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r...
null
2,441
false
Remove `extended` field from dataset tagger
## Describe the bug While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included: ``` dataset_name = 'arcd' @pytest.m...
https://github.com/huggingface/datasets/issues/2440
[ "The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too", "Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https://github.com/h...
null
2,440
false
Better error message when trying to access elements of a DatasetDict without specifying the split
As mentioned in #2437 it'd be nice to to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name. cc @thomwolf
https://github.com/huggingface/datasets/pull/2439
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2439", "html_url": "https://github.com/huggingface/datasets/pull/2439", "diff_url": "https://github.com/huggingface/datasets/pull/2439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2439.patch", "merged_at": "2021-06-07T08:54...
2,439
true
Fix NQ features loading: reorder fields of features to match nested fields order in arrow data
As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features don't match. The order is important since it matters for the underlying arrow schema. To fix that I re-order the features based on the arrow schema: ```python inferred_fe...
https://github.com/huggingface/datasets/pull/2438
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2438", "html_url": "https://github.com/huggingface/datasets/pull/2438", "diff_url": "https://github.com/huggingface/datasets/pull/2438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2438.patch", "merged_at": "2021-06-04T09:02...
2,438
true
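The reordering idea described in this PR can be sketched in plain Python (a simplified illustration under the assumption that features are nested dicts — not the actual `datasets` implementation, and `reorder_like` is a hypothetical name): walk the user-provided features and emit keys in the same order as the reference schema, since field order matters for the underlying Arrow schema.

```python
# Illustrative sketch: recursively reorder a nested feature dict so its
# (nested) field order matches a reference schema.

def reorder_like(features, reference):
    """Return `features` with nested keys in the same order as `reference`."""
    if isinstance(features, dict):
        return {key: reorder_like(features[key], reference[key]) for key in reference}
    return features

inferred = {"id": "string", "annotations": {"start": "int32", "text": "string"}}
user = {"annotations": {"text": "string", "start": "int32"}, "id": "string"}
print(list(reorder_like(user, inferred)))  # top-level order now matches `inferred`
```

This relies on Python dicts preserving insertion order (guaranteed since Python 3.7).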
Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
https://github.com/huggingface/datasets/pull/2437
[ "We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?\r\n\r\nFor instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2437", "html_url": "https://github.com/huggingface/datasets/pull/2437", "diff_url": "https://github.com/huggingface/datasets/pull/2437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2437.patch", "merged_at": "2021-06-08T18:03...
2,437
true
Update DatasetMetadata and ReadMe
This PR contains the changes discussed in #2395. **Edit**: In addition to those changes, I'll be updating the `ReadMe` as follows: Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors. One way to make `ReadMe` consistent...
https://github.com/huggingface/datasets/pull/2436
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2436", "html_url": "https://github.com/huggingface/datasets/pull/2436", "diff_url": "https://github.com/huggingface/datasets/pull/2436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2436.patch", "merged_at": "2021-06-14T13:23...
2,436
true
Insert Extractive QA templates for SQuAD-like datasets
This PR adds task templates for 9 SQuAD-like datasets with the following properties: * 1 config * A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column because the current implementation does not support casting with mismatched columns. see #2434) * Less than 20...
https://github.com/huggingface/datasets/pull/2435
[ "hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)", "urgh, the windows tests are failing because of encoding issues ๐Ÿ˜ข \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2435", "html_url": "https://github.com/huggingface/datasets/pull/2435", "diff_url": "https://github.com/huggingface/datasets/pull/2435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2435.patch", "merged_at": "2021-06-03T14:32...
2,435
true
Extend QuestionAnsweringExtractive template to handle nested columns
Currently the `QuestionAnsweringExtractive` task template and `prepare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like: * `iapp_wiki_qa_squad` * `parsinlu_reading_comprehension` where the nested features differ from those of `squad` and trigger an `ArrowNot...
https://github.com/huggingface/datasets/issues/2434
[ "this is also the case for the following datasets and configurations:\r\n\r\n* `mlqa` with config `mlqa-translate-train.ar`\r\n\r\n", "The current task API is somewhat deprecated (we plan to align it with `train eval index` at some point), so I think we can close this issue." ]
null
2,434
false
Fix DuplicatedKeysError in adversarial_qa
Fixes #2431
https://github.com/huggingface/datasets/pull/2433
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2433", "html_url": "https://github.com/huggingface/datasets/pull/2433", "diff_url": "https://github.com/huggingface/datasets/pull/2433.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2433.patch", "merged_at": "2021-06-01T08:52...
2,433
true
Fix CI six installation on linux
For some reason we end up with this error in the linux CI when running pip install .[tests] ``` pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequireme...
https://github.com/huggingface/datasets/pull/2432
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2432", "html_url": "https://github.com/huggingface/datasets/pull/2432", "diff_url": "https://github.com/huggingface/datasets/pull/2432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2432.patch", "merged_at": "2021-05-31T13:17...
2,432
true
DuplicatedKeysError when trying to load adversarial_qa
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python dataset = load_dataset('adversarial_qa', 'adversarialQA') ``` ## Expected results The dataset should be loaded into memory ## Actual results >DuplicatedKeysError: FAILURE TO GENERATE DATASET ...
https://github.com/huggingface/datasets/issues/2431
[ "Thanks for reporting !\r\n#2433 fixed the issue, thanks @mariosasko :)\r\n\r\nWe'll do a patch release soon of the library.\r\nIn the meantime, you can use the fixed version of adversarial_qa by adding `script_version=\"master\"` in `load_dataset`" ]
null
2,431
false