Schema:
title: string (length 1-290)
body: string (length 0-228k)
html_url: string (length 46-51)
comments: list
pull_request: dict
number: int64 (1-5.59k)
is_pull_request: bool (2 classes)
Fix/start token mask issue and update documentation
This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing incorrectly high perplexity scores; 2) the documentation was not updated
https://github.com/huggingface/datasets/pull/4258
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. I updated the pull request description, and will kee...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4258", "html_url": "https://github.com/huggingface/datasets/pull/4258", "diff_url": "https://github.com/huggingface/datasets/pull/4258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4258.patch", "merged_at": "2022-05-02T16:26...
4,258
true
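A toy illustration (plain Python, not the PR's code) of how the attention-mask entry for a single token shifts a perplexity score. In the actual fix the mask feeds the model's attention, so this only sketches the mask-sensitivity of the metric.

```python
import math

# Mask-weighted perplexity: masked positions are excluded from both the
# summed negative log-likelihood and the token count.
def perplexity(token_nlls, attention_mask):
    total = sum(nll * m for nll, m in zip(token_nlls, attention_mask))
    count = sum(attention_mask)
    return math.exp(total / count)

nlls = [5.0, 1.0, 1.0, 1.0]  # hypothetical per-token losses
print(perplexity(nlls, [1, 1, 1, 1]))  # start token counted
print(perplexity(nlls, [0, 1, 1, 1]))  # start token masked out
```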
Create metric card for Mahalanobis Distance
Proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
https://github.com/huggingface/datasets/pull/4257
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4257", "html_url": "https://github.com/huggingface/datasets/pull/4257", "diff_url": "https://github.com/huggingface/datasets/pull/4257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4257.patch", "merged_at": "2022-05-02T14:43...
4,257
true
Create metric card for MSE
Proposing a metric card for Mean Squared Error
https://github.com/huggingface/datasets/pull/4256
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4256", "html_url": "https://github.com/huggingface/datasets/pull/4256", "diff_url": "https://github.com/huggingface/datasets/pull/4256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4256.patch", "merged_at": "2022-05-02T14:48...
4,256
true
No google drive URL for pubmed_qa
I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00
https://github.com/huggingface/datasets/pull/4255
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI is failing because some sections are missing in the dataset card, but this is unrelated to this PR - Merging !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4255", "html_url": "https://github.com/huggingface/datasets/pull/4255", "diff_url": "https://github.com/huggingface/datasets/pull/4255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4255.patch", "merged_at": "2022-04-29T16:18...
4,255
true
Replace data URL in SAMSum dataset and support streaming
This PR replaces data URL in SAMSum dataset: - original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
https://github.com/huggingface/datasets/pull/4254
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4254", "html_url": "https://github.com/huggingface/datasets/pull/4254", "diff_url": "https://github.com/huggingface/datasets/pull/4254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4254.patch", "merged_at": "2022-04-29T16:26...
4,254
true
Create metric cards for mean IOU
Proposing a metric card for mIoU :rocket: sorry for spamming you with review requests, @albertvillanova ! :hugs:
https://github.com/huggingface/datasets/pull/4253
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4253", "html_url": "https://github.com/huggingface/datasets/pull/4253", "diff_url": "https://github.com/huggingface/datasets/pull/4253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4253.patch", "merged_at": "2022-04-29T17:38...
4,253
true
Creating metric card for MAE
Initial proposal for MAE metric card
https://github.com/huggingface/datasets/pull/4252
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4252", "html_url": "https://github.com/huggingface/datasets/pull/4252", "diff_url": "https://github.com/huggingface/datasets/pull/4252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4252.patch", "merged_at": "2022-04-29T16:52...
4,252
true
Metric card for the XTREME-S dataset
Proposing a metric card for the XTREME-S dataset :hugs:
https://github.com/huggingface/datasets/pull/4251
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4251", "html_url": "https://github.com/huggingface/datasets/pull/4251", "diff_url": "https://github.com/huggingface/datasets/pull/4251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4251.patch", "merged_at": "2022-04-29T16:38...
4,251
true
Bump PyArrow Version to 6
Fixes #4152 This PR updates the PyArrow version to 6 in setup.py and in the CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml. This will fix the ArrayND error which exists in PyArrow 5.
https://github.com/huggingface/datasets/pull/4250
[ "_The documentation is not available anymore as the PR was closed or merged._", "Updated meta.yaml as well. Thanks.", "I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.", "> but ma...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4250", "html_url": "https://github.com/huggingface/datasets/pull/4250", "diff_url": "https://github.com/huggingface/datasets/pull/4250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4250.patch", "merged_at": "2022-05-04T09:29...
4,250
true
Support streaming XGLUE dataset
Support streaming XGLUE dataset. Fix #4247. CC: @severo
https://github.com/huggingface/datasets/pull/4249
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4249", "html_url": "https://github.com/huggingface/datasets/pull/4249", "diff_url": "https://github.com/huggingface/datasets/pull/4249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4249.patch", "merged_at": "2022-04-28T16:08...
4,249
true
conll2003 dataset loads original data.
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to...
https://github.com/huggingface/datasets/issues/4248
[ "Thanks for reporting @sue99.\r\n\r\nUnfortunately. I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n fea...
null
4,248
false
The data preview of XGLUE
It seems that something is wrong with the data preview of XGLUE
https://github.com/huggingface/datasets/issues/4247
[ "![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n", "Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (...
null
4,247
false
Support to load dataset with TSV files by passing only dataset name
This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
https://github.com/huggingface/datasets/pull/4246
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4246", "html_url": "https://github.com/huggingface/datasets/pull/4246", "diff_url": "https://github.com/huggingface/datasets/pull/4246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4246.patch", "merged_at": "2022-05-06T08:14...
4,246
true
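The extension-based builder kwargs described in the PR above can be sketched as follows; the dictionary and function names here are hypothetical illustrations, not the actual `datasets` internals.

```python
import os

# Hypothetical mapping from data-file extension to builder kwargs, so that
# TSV files get sep='\t' without the user passing it explicitly.
EXTENSION_TO_BUILDER_KWARGS = {
    ".csv": {},
    ".tsv": {"sep": "\t"},
}

def infer_builder_kwargs(data_file):
    _, ext = os.path.splitext(data_file)
    return EXTENSION_TO_BUILDER_KWARGS.get(ext.lower(), {})

print(infer_builder_kwargs("train.tsv"))  # {'sep': '\t'}
```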
Add code examples for DatasetDict
This PR adds code examples for `DatasetDict` in the API reference :)
https://github.com/huggingface/datasets/pull/4245
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4245", "html_url": "https://github.com/huggingface/datasets/pull/4245", "diff_url": "https://github.com/huggingface/datasets/pull/4245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4245.patch", "merged_at": "2022-04-29T18:13...
4,245
true
task id update
Changed multi-input text classification to a task id instead of a task category
https://github.com/huggingface/datasets/pull/4244
[ "Reverted the multi-input-text-classification tag from task_categories and added it as task_ids @lhoestq ", "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4244", "html_url": "https://github.com/huggingface/datasets/pull/4244", "diff_url": "https://github.com/huggingface/datasets/pull/4244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4244.patch", "merged_at": "2022-05-04T10:36...
4,244
true
WIP: Initial shades loading script and readme
null
https://github.com/huggingface/datasets/pull/4243
[ "Thanks for your contribution, @shayne-longpre.\r\n\r\nAre you still interested in adding this dataset? As we are transferring the dataset scripts from this GitHub repo, we would recommend you to add this to the Hugging Face Hub: https://huggingface.co/datasets" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4243", "html_url": "https://github.com/huggingface/datasets/pull/4243", "diff_url": "https://github.com/huggingface/datasets/pull/4243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4243.patch", "merged_at": null }
4,243
true
Update auth when mirroring datasets on the hub
We don't need to use extraHeaders for rate limits anymore. Anyway, extraHeaders was not working with git LFS because it was passing the wrong auth to S3.
https://github.com/huggingface/datasets/pull/4242
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4242", "html_url": "https://github.com/huggingface/datasets/pull/4242", "diff_url": "https://github.com/huggingface/datasets/pull/4242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4242.patch", "merged_at": "2022-04-27T17:30...
4,242
true
NonMatchingChecksumError when attempting to download GLUE
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to ...
https://github.com/huggingface/datasets/issues/4241
[ "Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_data...
null
4,241
false
Fix yield for crd3
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example. Modified the features accordingly: ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": ...
https://github.com/huggingface/datasets/pull/4240
[ "I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4240", "html_url": "https://github.com/huggingface/datasets/pull/4240", "diff_url": "https://github.com/huggingface/datasets/pull/4240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4240.patch", "merged_at": "2022-04-29T12:41...
4,240
true
Small fixes in ROC AUC docs
The list of use cases did not render on GitHub with the prepended spacing. Additionally, some typos were fixed.
https://github.com/huggingface/datasets/pull/4239
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4239", "html_url": "https://github.com/huggingface/datasets/pull/4239", "diff_url": "https://github.com/huggingface/datasets/pull/4239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4239.patch", "merged_at": "2022-05-02T13:22...
4,239
true
Dataset caching policy
## Describe the bug I cannot clean the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/d...
https://github.com/huggingface/datasets/issues/4238
[ "Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that y...
null
4,238
false
Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
https://github.com/huggingface/datasets/issues/4237
[ "Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|███████████████████████████...
null
4,237
false
Replace data URL in big_patent dataset and support streaming
This PR replaces the Google Drive URL with our Hub one, once the data owners approved hosting their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
https://github.com/huggingface/datasets/pull/4236
[ "_The documentation is not available anymore as the PR was closed or merged._", "I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4236", "html_url": "https://github.com/huggingface/datasets/pull/4236", "diff_url": "https://github.com/huggingface/datasets/pull/4236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4236.patch", "merged_at": "2022-05-02T18:21...
4,236
true
How to load VERY LARGE dataset?
### System Info ```shell I am using transformer trainer while meeting the issue. The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterDataset, which loads samples of da...
https://github.com/huggingface/datasets/issues/4235
[ "The `Trainer` support `IterableDataset`, not just datasets." ]
null
4,235
false
Autoeval config
Added autoeval config to imdb as pilot
https://github.com/huggingface/datasets/pull/4234
[ "_The documentation is not available anymore as the PR was closed or merged._", "Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424", "The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argum...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4234", "html_url": "https://github.com/huggingface/datasets/pull/4234", "diff_url": "https://github.com/huggingface/datasets/pull/4234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4234.patch", "merged_at": "2022-05-05T18:20...
4,234
true
Autoeval
null
https://github.com/huggingface/datasets/pull/4233
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4233). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4233", "html_url": "https://github.com/huggingface/datasets/pull/4233", "diff_url": "https://github.com/huggingface/datasets/pull/4233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4233.patch", "merged_at": null }
4,233
true
adding new tag to tasks.json and modified for existing datasets
null
https://github.com/huggingface/datasets/pull/4232
[ "_The documentation is not available anymore as the PR was closed or merged._", "closing in favor of https://github.com/huggingface/datasets/pull/4244" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4232", "html_url": "https://github.com/huggingface/datasets/pull/4232", "diff_url": "https://github.com/huggingface/datasets/pull/4232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4232.patch", "merged_at": null }
4,232
true
Fix invalid url to CC-Aligned dataset
The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid.
https://github.com/huggingface/datasets/pull/4231
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4231", "html_url": "https://github.com/huggingface/datasets/pull/4231", "diff_url": "https://github.com/huggingface/datasets/pull/4231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4231.patch", "merged_at": "2022-05-16T16:53...
4,231
true
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
https://github.com/huggingface/datasets/issues/4230
[ "Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.", "The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states...
null
4,230
false
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
https://github.com/huggingface/datasets/pull/4229
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4229", "html_url": "https://github.com/huggingface/datasets/pull/4229", "diff_url": "https://github.com/huggingface/datasets/pull/4229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4229.patch", "merged_at": null }
4,229
true
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
https://github.com/huggingface/datasets/pull/4228
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4228", "html_url": "https://github.com/huggingface/datasets/pull/4228", "diff_url": "https://github.com/huggingface/datasets/pull/4228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4228.patch", "merged_at": null }
4,228
true
Add f1 metric card, update docstring in py file
null
https://github.com/huggingface/datasets/pull/4227
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4227", "html_url": "https://github.com/huggingface/datasets/pull/4227", "diff_url": "https://github.com/huggingface/datasets/pull/4227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4227.patch", "merged_at": "2022-05-03T12:43...
4,227
true
Add pearsonr mc, update functionality to match the original docs
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
https://github.com/huggingface/datasets/pull/4226
[ "_The documentation is not available anymore as the PR was closed or merged._", "thank you @lhoestq!! :hugs: " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4226", "html_url": "https://github.com/huggingface/datasets/pull/4226", "diff_url": "https://github.com/huggingface/datasets/pull/4226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4226.patch", "merged_at": "2022-05-03T17:02...
4,226
true
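For context, a plain-Python sketch of the Pearson correlation the metric computes; the real metric wraps `scipy.stats.pearsonr`, whose second return value is the p-value this PR exposes.

```python
import math

# Pearson correlation coefficient: covariance normalized by the product
# of the standard deviations (computed here without scipy).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # ~1.0, perfect positive correlation
```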
autoeval config
add train eval index for autoeval
https://github.com/huggingface/datasets/pull/4225
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4225", "html_url": "https://github.com/huggingface/datasets/pull/4225", "diff_url": "https://github.com/huggingface/datasets/pull/4225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4225.patch", "merged_at": null }
4,225
true
autoeval config
add train eval index for autoeval
https://github.com/huggingface/datasets/pull/4224
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4224", "html_url": "https://github.com/huggingface/datasets/pull/4224", "diff_url": "https://github.com/huggingface/datasets/pull/4224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4224.patch", "merged_at": null }
4,224
true
Add Accuracy Metric Card
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
https://github.com/huggingface/datasets/pull/4223
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4223", "html_url": "https://github.com/huggingface/datasets/pull/4223", "diff_url": "https://github.com/huggingface/datasets/pull/4223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4223.patch", "merged_at": "2022-05-03T14:20...
4,223
true
Fix description links in dataset cards
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
https://github.com/huggingface/datasets/pull/4222
[ "_The documentation is not available anymore as the PR was closed or merged._", "Non passing tests are due to other pre-existing errors in dataset cards: not related to this PR." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4222", "html_url": "https://github.com/huggingface/datasets/pull/4222", "diff_url": "https://github.com/huggingface/datasets/pull/4222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4222.patch", "merged_at": "2022-04-26T16:52...
4,222
true
Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which, afaik, doesn't fit well within the values and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something? Thank you in advance.
https://github.com/huggingface/datasets/issues/4221
[ "Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n...
null
4,221
false
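The list-of-dict layout suggested in the answer above can be sketched with a toy schema check; this is plain Python with no `datasets` dependency, and the validator is illustrative only.

```python
# A list feature is written as a Python list whose single element is the
# dict schema every item must follow (mirroring the quoted answer).
schema = {"list_of_dict_feature": [{"key1_in_dict": str, "key2_in_dict": int}]}

def matches(example, schema):
    for name, spec in schema.items():
        if isinstance(spec, list):  # list feature: every item follows spec[0]
            if not all(matches(item, spec[0]) for item in example[name]):
                return False
        elif not isinstance(example[name], spec):
            return False
    return True

print(matches({"list_of_dict_feature": [{"key1_in_dict": "a", "key2_in_dict": 1}]}, schema))
```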
Altered faiss installation comment
null
https://github.com/huggingface/datasets/pull/4220
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! Can you explain why this change is needed ?", "Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The lates...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4220", "html_url": "https://github.com/huggingface/datasets/pull/4220", "diff_url": "https://github.com/huggingface/datasets/pull/4220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4220.patch", "merged_at": "2022-05-09T17:22...
4,220
true
Add F1 Metric Card
null
https://github.com/huggingface/datasets/pull/4219
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4219", "html_url": "https://github.com/huggingface/datasets/pull/4219", "diff_url": "https://github.com/huggingface/datasets/pull/4219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4219.patch", "merged_at": null }
4,219
true
Make code for image downloading from image urls cacheable
Fix #4199
https://github.com/huggingface/datasets/pull/4218
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4218", "html_url": "https://github.com/huggingface/datasets/pull/4218", "diff_url": "https://github.com/huggingface/datasets/pull/4218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4218.patch", "merged_at": "2022-04-26T13:38...
4,218
true
Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound; it also cannot be downloaded through the Python API.* Am I the one who added this dataset? No
https://github.com/huggingface/datasets/issues/4217
[ "Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://gith...
null
4,217
false
Avoid recursion error in map if example is returned as dict value
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: ...
https://github.com/huggingface/datasets/pull/4216
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4216", "html_url": "https://github.com/huggingface/datasets/pull/4216", "diff_url": "https://github.com/huggingface/datasets/pull/4216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4216.patch", "merged_at": "2022-05-04T17:12...
4,216
true
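The cycle behind the recursion error can be reproduced in plain Python: the update function returns a dict that holds the input example itself, so a naive merge makes the example contain itself.

```python
# Toy reproduction of the self-reference that map() had to guard against.
ex = {"en": "aa", "fr": "cc"}
out = {"translation": ex}       # update function returned the input as a value
ex.update(out)                  # naive merge: ex now contains itself
assert ex["translation"] is ex  # cycle -> recursion when walking the example

# Defensive variant: copy the input before embedding it in the output.
ex2 = {"en": "aa", "fr": "cc"}
ex2.update({"translation": dict(ex2)})
assert ex2["translation"] is not ex2
```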
Add `drop_last_batch` to `IterableDataset.map`
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
https://github.com/huggingface/datasets/pull/4215
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4215", "html_url": "https://github.com/huggingface/datasets/pull/4215", "diff_url": "https://github.com/huggingface/datasets/pull/4215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4215.patch", "merged_at": "2022-05-03T15:48...
4,215
true
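A minimal sketch of the `drop_last_batch` semantics added above; illustrative only, as the real logic lives inside `IterableDataset.map`'s batching.

```python
# Split a sequence into batches, optionally dropping a final partial batch.
def batched(items, batch_size, drop_last_batch=False):
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_last_batch and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

print(batched([1, 2, 3, 4, 5], 2))                        # [[1, 2], [3, 4], [5]]
print(batched([1, 2, 3, 4, 5], 2, drop_last_batch=True))  # [[1, 2], [3, 4]]
```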
Skip checksum computation in Imagefolder by default
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
https://github.com/huggingface/datasets/pull/4214
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4214", "html_url": "https://github.com/huggingface/datasets/pull/4214", "diff_url": "https://github.com/huggingface/datasets/pull/4214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4214.patch", "merged_at": "2022-05-03T15:21...
4,214
true
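For context, the checksum recording that is now skipped by default is essentially a hash over every data file, which is O(bytes) work; a minimal sketch, assuming SHA-256:

```python
import hashlib

# Hashing each file's bytes is the cost that skipping checksum recording
# avoids when loading an image folder.
def record_checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(record_checksum(b"abc"))
```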
ETT time series dataset
Ready for review.
https://github.com/huggingface/datasets/pull/4213
[ "_The documentation is not available anymore as the PR was closed or merged._", "thank you!\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4213", "html_url": "https://github.com/huggingface/datasets/pull/4213", "diff_url": "https://github.com/huggingface/datasets/pull/4213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4213.patch", "merged_at": "2022-05-05T12:10...
4,213
true
[Common Voice] Make sure bytes are correctly deleted if `path` exists
`path` should be set to the local path inside the audio feature if it exists, so that the bytes can be correctly deleted.
https://github.com/huggingface/datasets/pull/4212
[ "_The documentation is not available anymore as the PR was closed or merged._", "cool that you noticed that we store unnecessary bytes again :D " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4212", "html_url": "https://github.com/huggingface/datasets/pull/4212", "diff_url": "https://github.com/huggingface/datasets/pull/4212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4212.patch", "merged_at": "2022-04-26T22:48...
4,212
true
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code...
https://github.com/huggingface/datasets/issues/4211
[ "Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we u...
null
4,211
false
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed ...
https://github.com/huggingface/datasets/issues/4210
[ "Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\...
null
4,210
false
Add CMU MoCap Dataset
Resolves #3457 Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow ups, so I ended up crawling the website to get categories, subcategories and descrip...
https://github.com/huggingface/datasets/pull/4208
[ "_The documentation is not available anymore as the PR was closed or merged._", "- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4208", "html_url": "https://github.com/huggingface/datasets/pull/4208", "diff_url": "https://github.com/huggingface/datasets/pull/4208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4208.patch", "merged_at": null }
4,208
true
[Minor edit] Fix typo in class name
Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`
https://github.com/huggingface/datasets/pull/4207
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4207", "html_url": "https://github.com/huggingface/datasets/pull/4207", "diff_url": "https://github.com/huggingface/datasets/pull/4207.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4207.patch", "merged_at": "2022-05-05T13:17...
4,207
true
Add Nerval Metric
This PR adds readme.md and ner_val.py to metrics. Nerval is a python package that helps evaluate NER models. It creates classification report and confusion matrix at entity level.
https://github.com/huggingface/datasets/pull/4206
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4206", "html_url": "https://github.com/huggingface/datasets/pull/4206", "diff_url": "https://github.com/huggingface/datasets/pull/4206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4206.patch", "merged_at": null }
4,206
true
Fix `convert_file_size_to_int` for kilobits and megabits
Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
https://github.com/huggingface/datasets/pull/4205
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4205", "html_url": "https://github.com/huggingface/datasets/pull/4205", "diff_url": "https://github.com/huggingface/datasets/pull/4205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4205.patch", "merged_at": "2022-05-03T15:21...
4,205
true
Add Recall Metric Card
What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted met...
https://github.com/huggingface/datasets/pull/4204
[ "_The documentation is not available anymore as the PR was closed or merged._", "This looks good to me! " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4204", "html_url": "https://github.com/huggingface/datasets/pull/4204", "diff_url": "https://github.com/huggingface/datasets/pull/4204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4204.patch", "merged_at": "2022-05-03T13:16...
4,204
true
Add Precision Metric Card
What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatt...
https://github.com/huggingface/datasets/pull/4203
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4203", "html_url": "https://github.com/huggingface/datasets/pull/4203", "diff_url": "https://github.com/huggingface/datasets/pull/4203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4203.patch", "merged_at": "2022-05-03T14:16...
4,203
true
Fix some type annotation in doc
null
https://github.com/huggingface/datasets/pull/4202
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4202", "html_url": "https://github.com/huggingface/datasets/pull/4202", "diff_url": "https://github.com/huggingface/datasets/pull/4202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4202.patch", "merged_at": "2022-04-22T14:56...
4,202
true
Update GH template for dataset viewer issues
Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
https://github.com/huggingface/datasets/pull/4201
[ "_The documentation is not available anymore as the PR was closed or merged._", "You can see rendering at: https://github.com/huggingface/datasets/blob/6b48fedbdafe12a42c7b6edcecc32820af1a4822/.github/ISSUE_TEMPLATE/dataset-viewer.yml" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4201", "html_url": "https://github.com/huggingface/datasets/pull/4201", "diff_url": "https://github.com/huggingface/datasets/pull/4201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4201.patch", "merged_at": "2022-04-26T08:45...
4,201
true
Add to docs how to load from local script
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
https://github.com/huggingface/datasets/pull/4200
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4200", "html_url": "https://github.com/huggingface/datasets/pull/4200", "diff_url": "https://github.com/huggingface/datasets/pull/4200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4200.patch", "merged_at": "2022-04-23T05:47...
4,200
true
Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug It looks like that result of `.map` operation dataset are missing the cache when you reload the script and always run from scratch. In same interpretor session, they are able to find the cache and reload it. But, when you exit the interpretor and reload it, the downloading starts from scratch. ...
https://github.com/huggingface/datasets/issues/4199
[ "Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache", "Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\...
null
4,199
false
There is no dataset
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
https://github.com/huggingface/datasets/issues/4198
[]
null
4,198
false
Add remove_columns=True
This should fix all the issues we have with in-place operations in mapping functions. This is crucial as sometimes we do some weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, bat...
https://github.com/huggingface/datasets/pull/4197
[ "_The documentation is not available anymore as the PR was closed or merged._", "Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.p...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4197", "html_url": "https://github.com/huggingface/datasets/pull/4197", "diff_url": "https://github.com/huggingface/datasets/pull/4197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4197.patch", "merged_at": null }
4,197
true
Embed image and audio files in `save_to_disk`
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and set it to True by default to save_to_disk would be kind of a b...
https://github.com/huggingface/datasets/issues/4196
[]
null
4,196
false
Support lists of multi-dimensional numpy arrays
Fix #4191. CC: @SaulLu
https://github.com/huggingface/datasets/pull/4194
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4194", "html_url": "https://github.com/huggingface/datasets/pull/4194", "diff_url": "https://github.com/huggingface/datasets/pull/4194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4194.patch", "merged_at": "2022-05-12T15:08...
4,194
true
Document save_to_disk and push_to_hub on images and audio files
Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.
https://github.com/huggingface/datasets/pull/4193
[ "_The documentation is not available anymore as the PR was closed or merged._", "Good catch, I updated the docstrings" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4193", "html_url": "https://github.com/huggingface/datasets/pull/4193", "diff_url": "https://github.com/huggingface/datasets/pull/4193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4193.patch", "merged_at": "2022-04-22T09:49...
4,193
true
load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwa...
https://github.com/huggingface/datasets/issues/4192
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json...
null
4,192
false
feat: create an `Array3D` column from a list of arrays of dimension 2
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create a `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the...
https://github.com/huggingface/datasets/issues/4191
[ "Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `da...
null
4,191
false
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b1779173...
https://github.com/huggingface/datasets/pull/4190
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4190", "html_url": "https://github.com/huggingface/datasets/pull/4190", "diff_url": "https://github.com/huggingface/datasets/pull/4190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4190.patch", "merged_at": "2022-04-22T13:52...
4,190
true
Document how to use FAISS index for special operations
Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.
https://github.com/huggingface/datasets/pull/4189
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4189", "html_url": "https://github.com/huggingface/datasets/pull/4189", "diff_url": "https://github.com/huggingface/datasets/pull/4189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4189.patch", "merged_at": "2022-05-06T08:35...
4,189
true
Support streaming cnn_dailymail dataset
Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
https://github.com/huggingface/datasets/pull/4188
[ "_The documentation is not available anymore as the PR was closed or merged._", "Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4188", "html_url": "https://github.com/huggingface/datasets/pull/4188", "diff_url": "https://github.com/huggingface/datasets/pull/4188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4188.patch", "merged_at": "2022-04-20T15:52...
4,188
true
Don't duplicate data when encoding audio or image
Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular it's common for audio datasets builder...
https://github.com/huggingface/datasets/pull/4187
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4187", "html_url": "https://github.com/huggingface/datasets/pull/4187", "diff_url": "https://github.com/huggingface/datasets/pull/4187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4187.patch", "merged_at": "2022-04-21T09:10...
4,187
true
Fix outdated docstring about default dataset config
null
https://github.com/huggingface/datasets/pull/4186
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4186", "html_url": "https://github.com/huggingface/datasets/pull/4186", "diff_url": "https://github.com/huggingface/datasets/pull/4186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4186.patch", "merged_at": "2022-04-22T12:48...
4,186
true
Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audi...
https://github.com/huggingface/datasets/issues/4185
[ "(@patrickvonplaten )", "Also cc @lhoestq here", "The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is d...
null
4,185
false
[Librispeech] Add 'all' config
Add `"all"` config to Librispeech Closed #4179
https://github.com/huggingface/datasets/pull/4184
[ "Fix https://github.com/huggingface/datasets/issues/4179", "_The documentation is not available anymore as the PR was closed or merged._", "Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do st...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4184", "html_url": "https://github.com/huggingface/datasets/pull/4184", "diff_url": "https://github.com/huggingface/datasets/pull/4184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4184.patch", "merged_at": "2022-04-22T09:45...
4,184
true
Document librispeech configs
Added an example of how to load one config or the other
https://github.com/huggingface/datasets/pull/4183
[ "I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https://github.com/huggingface/datasets/issues/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: htt...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4183", "html_url": "https://github.com/huggingface/datasets/pull/4183", "diff_url": "https://github.com/huggingface/datasets/pull/4183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4183.patch", "merged_at": null }
4,183
true
Zenodo.org download is not responding
## Describe the bug Source download_url from zenodo.org does not respond. `_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"` Other datasets also use zenodo.org to store data and they cannot be downloaded as well. It would be better to actually use a more reliable way to store original ...
https://github.com/huggingface/datasets/issues/4182
[ "[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]", "Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo...
null
4,182
false
Support streaming FLEURS dataset
## Dataset viewer issue for '*name of the dataset*' https://huggingface.co/datasets/google/fleurs ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str...
https://github.com/huggingface/datasets/issues/4181
[ "Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.", "Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \...
null
4,181
false
Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset. Having an iterator (or sequence) type of object, would make inference ...
https://github.com/huggingface/datasets/issues/4180
[ "Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio...
null
4,180
false
Dataset librispeech_asr fails to load
## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/libris...
https://github.com/huggingface/datasets/issues/4179
[ "@patrickvonplaten Hi! I saw that you prepared this? :)", "Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\...
null
4,179
false
[feat] Add ImageNet dataset
To use the dataset download the tar file [imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using: ```py from datasets impo...
https://github.com/huggingface/datasets/pull/4178
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in conf...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4178", "html_url": "https://github.com/huggingface/datasets/pull/4178", "diff_url": "https://github.com/huggingface/datasets/pull/4178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4178.patch", "merged_at": "2022-04-29T21:37...
4,178
true
Adding missing subsets to the `SemEval-2018 Task 1` dataset
This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, b...
https://github.com/huggingface/datasets/pull/4177
[ "Datasets are not tracked in this repository anymore. You should move this PR to the [discussions page of this dataset](https://huggingface.co/datasets/sem_eval_2018_task_1/discussions)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4177", "html_url": "https://github.com/huggingface/datasets/pull/4177", "diff_url": "https://github.com/huggingface/datasets/pull/4177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4177.patch", "merged_at": null }
4,177
true
Very slow between two operations
Hello, in the processing stage, I use two operations. The first one, map + filter, is very fast and it uses the full cores, while the second step is very slow and does not use full cores. Also, there is a significant lag between them. Am I missing something? ``` raw_datasets = raw_datasets.map(split_func...
https://github.com/huggingface/datasets/issues/4176
[]
null
4,176
false
Add WIT Dataset
closes #2981 #2810 @nateraw @hassiahk I've listed you guys as co-author as you've contributed previously to this dataset
https://github.com/huggingface/datasets/pull/4175
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi! Coming in late with some context.\r\n\r\nThere are two versions of the WIT dataset:\r\n1. The original source dataset managed by Wikimedia. It has more information, raw image representations, and each row corresponds to an image ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4175", "html_url": "https://github.com/huggingface/datasets/pull/4175", "diff_url": "https://github.com/huggingface/datasets/pull/4175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4175.patch", "merged_at": null }
4,175
true
Fix when map function modifies input in-place
When `function` modifies input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. Therefore we need to relax the way we pop elements by checking if that column exists.
https://github.com/huggingface/datasets/pull/4174
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4174", "html_url": "https://github.com/huggingface/datasets/pull/4174", "diff_url": "https://github.com/huggingface/datasets/pull/4174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4174.patch", "merged_at": "2022-04-15T14:45...
4,174
true
Stream private zipped images
As mentioned in https://github.com/huggingface/datasets/issues/4139 it's currently not possible to stream private/gated zipped images from the Hub. This is because `Image.decode_example` does not handle authentication. Indeed decoding requires to access and download the file from the private repository. In this P...
https://github.com/huggingface/datasets/pull/4173
[ "_The documentation is not available anymore as the PR was closed or merged._", "oops looks like some tests are failing sorry, will fix them tomorrow\r\n\r\nEDIT: not today but asap hopefully", "cc @mariosasko this is ready for review, let me know what you think !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4173", "html_url": "https://github.com/huggingface/datasets/pull/4173", "diff_url": "https://github.com/huggingface/datasets/pull/4173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4173.patch", "merged_at": "2022-05-05T13:58...
4,173
true
Update assin2 dataset_infos.json
Following comments in https://github.com/huggingface/datasets/issues/4003 we found that it was outdated and causing an error when loading the dataset
https://github.com/huggingface/datasets/pull/4172
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4172", "html_url": "https://github.com/huggingface/datasets/pull/4172", "diff_url": "https://github.com/huggingface/datasets/pull/4172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4172.patch", "merged_at": "2022-04-15T14:41...
4,172
true
to_tf_dataset rewrite
This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are: - Much better stability and no more dropping unexpected column names (Sorry @NielsRogge) - Doesn't clobber custom transforms on the data (Sorry @NielsRogge again) - Much better han...
https://github.com/huggingface/datasets/pull/4170
[ "_The documentation is not available anymore as the PR was closed or merged._", "[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!", "@gante I renamed the defaul...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4170", "html_url": "https://github.com/huggingface/datasets/pull/4170", "diff_url": "https://github.com/huggingface/datasets/pull/4170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4170.patch", "merged_at": "2022-06-06T14:22...
4,170
true
Timit_asr dataset cannot be previewed recently
## Dataset viewer issue for '*timit_asr*' **Link:** *https://huggingface.co/datasets/timit_asr* Issue: The timit-asr dataset cannot be previewed recently. Am I the one who added this dataset ? Yes-No No
https://github.com/huggingface/datasets/issues/4169
[ "Thanks for reporting. The bug has already been detected, and we hope to fix it soon.", "TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it", "> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might ta...
null
4,169
false
Add code examples to API docs
This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on: - Do you think it is clearer to make every code example fully reproducible so when users co...
https://github.com/huggingface/datasets/pull/4168
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4168", "html_url": "https://github.com/huggingface/datasets/pull/4168", "diff_url": "https://github.com/huggingface/datasets/pull/4168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4168.patch", "merged_at": "2022-04-27T18:48...
4,168
true
Avoid rate limit in update hub repositories
use http.extraHeader to avoid rate limit
https://github.com/huggingface/datasets/pull/4167
[ "I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones", "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4167", "html_url": "https://github.com/huggingface/datasets/pull/4167", "diff_url": "https://github.com/huggingface/datasets/pull/4167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4167.patch", "merged_at": "2022-04-13T20:50...
4,167
true
Fix exact match
Clarify docs and add clarifying example to the exact_match metric
https://github.com/huggingface/datasets/pull/4166
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4166", "html_url": "https://github.com/huggingface/datasets/pull/4166", "diff_url": "https://github.com/huggingface/datasets/pull/4166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4166.patch", "merged_at": "2022-05-03T12:16...
4,166
true
Fix google bleu typos, examples
null
https://github.com/huggingface/datasets/pull/4165
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4165", "html_url": "https://github.com/huggingface/datasets/pull/4165", "diff_url": "https://github.com/huggingface/datasets/pull/4165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4165.patch", "merged_at": "2022-05-03T12:16...
4,165
true
Fix duplicate key in multi_news
To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928
https://github.com/huggingface/datasets/pull/4164
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4164", "html_url": "https://github.com/huggingface/datasets/pull/4164", "diff_url": "https://github.com/huggingface/datasets/pull/4164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4164.patch", "merged_at": "2022-04-13T20:58...
4,164
true
Optional Content Warning for Datasets
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild I'm wondering if there is an option to select a content warning messa...
https://github.com/huggingface/datasets/issues/4163
[ "Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ", "Hi @mariosasko, thanks for explaining how to add this f...
null
4,163
false
Add Conceptual 12M
null
https://github.com/huggingface/datasets/pull/4162
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4162", "html_url": "https://github.com/huggingface/datasets/pull/4162", "diff_url": "https://github.com/huggingface/datasets/pull/4162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4162.patch", "merged_at": "2022-04-15T08:06...
4,162
true
Add Visual Genome
null
https://github.com/huggingface/datasets/pull/4161
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.co...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4161", "html_url": "https://github.com/huggingface/datasets/pull/4161", "diff_url": "https://github.com/huggingface/datasets/pull/4161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4161.patch", "merged_at": "2022-04-21T13:08...
4,161
true
RGBA images not showing
## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent [**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent) ![image](https://user-images.githubusercontent.com/15624271/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png) Am I the...
https://github.com/huggingface/datasets/issues/4160
[ "Thanks for reporting. It's a known issue, and we hope to fix it soon.", "Fixed, thanks!" ]
null
4,160
false
Add `TruthfulQA` dataset
null
https://github.com/huggingface/datasets/pull/4159
[ "_The documentation is not available anymore as the PR was closed or merged._", "Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4159", "html_url": "https://github.com/huggingface/datasets/pull/4159", "diff_url": "https://github.com/huggingface/datasets/pull/4159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4159.patch", "merged_at": "2022-06-08T14:43...
4,159
true
Add AUC ROC Metric
null
https://github.com/huggingface/datasets/pull/4158
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4158", "html_url": "https://github.com/huggingface/datasets/pull/4158", "diff_url": "https://github.com/huggingface/datasets/pull/4158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4158.patch", "merged_at": "2022-04-26T19:35...
4,158
true
Fix formatting in BLEU metric card
Fix #4148
https://github.com/huggingface/datasets/pull/4157
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4157", "html_url": "https://github.com/huggingface/datasets/pull/4157", "diff_url": "https://github.com/huggingface/datasets/pull/4157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4157.patch", "merged_at": "2022-04-13T14:16...
4,157
true
Adding STSb-TR dataset
This PR adds the Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf).
https://github.com/huggingface/datasets/pull/4156
[ "Thanks for your contribution, @figenfikri.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4156", "html_url": "https://github.com/huggingface/datasets/pull/4156", "diff_url": "https://github.com/huggingface/datasets/pull/4156.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4156.patch", "merged_at": null }
4,156
true