| url (string, len 58–61) | repository_url (string, 1 class) | labels_url (string, len 72–75) | comments_url (string, len 67–70) | events_url (string, len 65–68) | html_url (string, len 48–51) | id (int64, 600M–3.09B) | node_id (string, len 18–24) | number (int64, 2–7.59k) | title (string, len 1–290) | user (dict) | labels (list, len 0–4) | state (string, 1 class) | locked (bool, 1 class) | assignee (dict) | assignees (list, len 0–4) | milestone (dict) | comments (list, len 0–30) | created_at (timestamp[ns, tz=UTC], 2020-04-14 18:18:51 – 2025-05-27 13:46:05) | updated_at (timestamp[ns, tz=UTC], 2020-04-29 09:23:05 – 2025-06-09 22:00:16) | closed_at (timestamp[ns, tz=UTC], 2020-04-29 09:23:05 – 2025-06-06 16:12:36) | author_association (string, 4 classes) | type (float64) | active_lock_reason (float64) | sub_issues_summary (dict) | body (string, len 0–228k, nullable) | closed_by (dict) | reactions (dict) | timeline_url (string, len 67–70) | performed_via_github_app (float64) | state_reason (string, 3 classes) | draft (float64) | pull_request (null) | time_to_close_hours (float64, 0.01–28.8k) | __index_level_0__ (int64, 18–7.53k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6139/comments | https://api.github.com/repos/huggingface/datasets/issues/6139/events | https://github.com/huggingface/datasets/issues/6139 | 1,844,991,583 | I_kwDODunzps5t-FZf | 6,139 | Offline dataset viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": fals... | closed | false | null | [] | null | [
"Hi, thanks for the suggestion. It's not possible at the moment. The viewer is part of the Hub codebase and only works on public datasets. Also, it relies on [Datasets Server](https://github.com/huggingface/datasets-server/), which prepares the data and provides an API to access the rows, size, etc.\r\n\r\nIf you'r... | 2023-08-10T11:30:00Z | 2024-09-24T18:36:35Z | 2023-09-29T13:10:22Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to create a dataset viewer offline, i.e. to run code that opens some kind of HTML page that makes it easy to view the dataset?
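For reference, a minimal sketch of the kind of thing I mean (assuming the dataset was saved locally with `save_to_disk`; the path is a placeholder):
```python
from datasets import load_from_disk

# load a dataset previously saved with Dataset.save_to_disk (path is a placeholder)
ds = load_from_disk("path/to/my_dataset")

# dump the first rows to a static HTML page for quick offline inspection
ds.select(range(min(100, len(ds)))).to_pandas().to_html("preview.html")
```
This only covers flat rows, of course; the hosted viewer does much more.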
### Motivation
I want to easily view my dataset even when it is stored locally.
### Your contribution
N.A. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6139/timeline | null | completed | null | null | 1,201.672778 | 1,509 |
https://api.github.com/repos/huggingface/datasets/issues/6136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6136/comments | https://api.github.com/repos/huggingface/datasets/issues/6136/events | https://github.com/huggingface/datasets/issues/6136 | 1,844,887,866 | I_kwDODunzps5t9sE6 | 6,136 | CI check_code_quality error: E721 Do not compare types, use `isinstance()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2023-08-10T10:19:50Z | 2023-08-10T11:22:58Z | 2023-08-10T11:22:58Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6136/timeline | null | completed | null | null | 1.052222 | 1,512 |
https://api.github.com/repos/huggingface/datasets/issues/6134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6134/comments | https://api.github.com/repos/huggingface/datasets/issues/6134/events | https://github.com/huggingface/datasets/issues/6134 | 1,844,535,142 | I_kwDODunzps5t8V9m | 6,134 | `datasets` cannot be installed alongside `apache-beam` | {
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/boyleconnor",
"id": 6520892,
"login": "boyleconnor",
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/boyleconnor",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I noticed that this is actually covered by issue #5613, which for some reason I didn't see when I searched the issues in this repo the first time."
] | 2023-08-10T06:54:32Z | 2023-09-01T03:19:49Z | 2023-08-10T15:22:10Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully; however, actually trying to do something such as importing the `load_dataset` method from `datasets` results in a crashing error.
I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15) (on which `datasets` depends) requires `dill>=0.3.7`, so the dependency resolver falls back to an older version of `multiprocess`, which causes `datasets` to crash since it doesn't appear to be compatible with those older versions.
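To make the conflict concrete, a small diagnostic sketch that prints which versions of the packages involved were actually resolved in the environment:
```python
from importlib.metadata import PackageNotFoundError, version

# print the versions pip actually resolved for the conflicting packages
for pkg in ("datasets", "apache-beam", "dill", "multiprocess"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```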
### Steps to reproduce the bug
See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug.
In some environments, I have been able to reproduce the bug by running the following in Bash:
```bash
$ pip install datasets apache-beam
```
then the following in a Python shell:
```python
from datasets import load_dataset
```
Here is my stacktrace from running on Google Colab:
<details>
<summary>stacktrace</summary>
```
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.14.4"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
64
65 from . import config
---> 66 from .arrow_reader import ArrowReader
67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
68 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
28 import pyarrow.parquet as pq
29
---> 30 from .download.download_config import DownloadConfig
31 from .naming import _split_re, filenames_for_dataset_split
32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
[/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module>
7
8 from .download_config import DownloadConfig
----> 9 from .download_manager import DownloadManager, DownloadMode
10 from .streaming_download_manager import StreamingDownloadManager
[/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module>
33 from ..utils.info_utils import get_size_checksum_dict
34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm
---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str
36 from .download_config import DownloadConfig
37
[/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module>
38 import dill
39 import multiprocess
---> 40 import multiprocess.pool
41 import numpy as np
42 from packaging import version
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module>
607 #
608
--> 609 class ThreadPool(Pool):
610
611 from .dummy import Process
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool()
609 class ThreadPool(Pool):
610
--> 611 from .dummy import Process
612
613 def __init__(self, processes=None, initializer=None, initargs=()):
[/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module>
85 #
86
---> 87 class Condition(threading._Condition):
88 # XXX
89 if sys.version_info < (3, 0):
AttributeError: module 'threading' has no attribute '_Condition'
```
</details>
I've also found that attempting to install `datasets` and `apache-beam` together in certain environments (e.g. via pip inside a conda env) simply causes pip to hang indefinitely.
### Expected behavior
I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`.
### Environment info
Google Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/boyleconnor",
"id": 6520892,
"login": "boyleconnor",
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/boyleconnor",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6134/timeline | null | not_planned | null | null | 8.460556 | 1,514 |
https://api.github.com/repos/huggingface/datasets/issues/6132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6132/comments | https://api.github.com/repos/huggingface/datasets/issues/6132/events | https://github.com/huggingface/datasets/issues/6132 | 1,843,491,020 | I_kwDODunzps5t4XDM | 6,132 | to_iterable_dataset is missing in document | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Fixed with PR"
] | 2023-08-09T15:15:03Z | 2023-08-16T04:43:36Z | 2023-08-16T04:43:29Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`to_iterable_dataset` is missing from the documentation.
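For reference, the method is already usable even though it is undocumented (a quick sketch; `num_shards=1` for simplicity):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
iterable_ds = ds.to_iterable_dataset(num_shards=1)  # returns an IterableDataset
for example in iterable_ds:
    print(example)
```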
### Steps to reproduce the bug
Search the documentation for `to_iterable_dataset`: it is not documented anywhere.
### Expected behavior
documentation enhancement: `to_iterable_dataset` should be documented
### Environment info
unrelated | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6132/timeline | null | completed | null | null | 157.473889 | 1,516 |
https://api.github.com/repos/huggingface/datasets/issues/6130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6130/comments | https://api.github.com/repos/huggingface/datasets/issues/6130/events | https://github.com/huggingface/datasets/issues/6130 | 1,843,158,846 | I_kwDODunzps5t3F8- | 6,130 | default config name doesn't work when config kwargs are specified. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq ",
"What should be the behavior in this case ? Should it override the default config with the added parameter ?",
"I know why it should be treated as a new config if overriding parameters are passed. But in some case, I just pass in some common fields like `data_dir`.\r\n\r\nFor example, I want to ext... | 2023-08-09T12:43:15Z | 2023-11-22T11:50:49Z | 2023-11-22T11:50:48Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522
If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` for their customized `BuilderConfig`, that logic is ignored, and the dataset cannot select the default config from among multiple configs.
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('/dataset/with/multiple/config')  # Ok
datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err
```
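To spell out the behavior, a stripped-down sketch of the selection logic (illustrative only, not the actual `builder.py` code):
```python
DEFAULT_CONFIG_NAME = "default"
BUILDER_CONFIG_NAMES = ["default", "extra"]

def select_config(config_name=None, **config_kwargs):
    # current behavior: the default is only chosen when *no* kwargs are passed,
    # so passing a common field like data_dir makes the choice ambiguous again
    if config_name is None and not config_kwargs:
        return DEFAULT_CONFIG_NAME
    if config_name is None:
        raise ValueError(f"Config name is missing. Pick one among {BUILDER_CONFIG_NAMES}.")
    return config_name

print(select_config())  # -> 'default'
try:
    select_config(some_field_in_config="some")
except ValueError as e:
    print(e)  # fails even though a default config exists
```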
### Expected behavior
Default config behavior should be consistent.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6130/timeline | null | completed | null | null | 2,519.125833 | 1,517 |
https://api.github.com/repos/huggingface/datasets/issues/6128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6128/comments | https://api.github.com/repos/huggingface/datasets/issues/6128/events | https://github.com/huggingface/datasets/issues/6128 | 1,841,545,493 | I_kwDODunzps5tw8EV | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomasAndersonFang",
"id": 38727343,
"login": "TomasAndersonFang",
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomasAndersonFang",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile",
"> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (mo... | 2023-08-08T15:32:08Z | 2023-12-26T07:51:57Z | 2023-08-11T13:35:09Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
This bug occurs when I use `torch.compile(model)` in my code; it seems to trigger an error in the `datasets` library.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
AutoConfig,
DataCollatorForSeq2Seq,
Trainer,
Seq2SeqTrainer,
HfArgumentParser,
Seq2SeqTrainingArguments,
BitsAndBytesConfig,
)
from peft import (
LoraConfig,
get_peft_model,
get_peft_model_state_dict,
prepare_model_for_int8_training,
set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field
# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
LORA_TARGET_MODULES = ["query_key_value"]
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
# cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})
def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True):
result = tokenizer(
text,
truncation=True,
max_length=max_seq_len,
padding=False,
return_tensors=None,
)
if (
result["input_ids"][-1] != tokenizer.eos_token_id
and len(result["input_ids"]) < max_seq_len
and add_eos_token
):
result["input_ids"].append(tokenizer.eos_token_id)
result["attention_mask"].append(1)
if add_eos_token and len(result["input_ids"]) >= max_seq_len:
result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
result["attention_mask"][max_seq_len - 1] = 1
result["labels"] = result["input_ids"].copy()
return result
def main():
parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
if training_args.is_lora:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
torch_dtype=torch.float16,
trust_remote_code=True,
load_in_8bit=True,
quantization_config=BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0
),
)
model = prepare_model_for_int8_training(model)
config = LoraConfig(
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=LORA_TARGET_MODULES,
lora_dropout=LORA_DROPOUT,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
else:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
torch_dtype=torch.float16,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
model.config.use_cache = False
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
print_trainable_parameters(model)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
model_max_length=training_args.model_max_length,
padding_side="left",
use_fast=True,
trust_remote_code=True,
)
tokenizer.pad_token = tokenizer.eos_token
# Load dataset
def generate_and_tokenize_prompt(sample):
input_text = sample["input"]
target_text = sample["output"] + tokenizer.eos_token
full_text = input_text + target_text
tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512)
tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512)
input_len = len(tokenized_input_text["input_ids"]) - 1 # -1 for eos token
tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
return tokenized_full_text
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.eval_file is not None:
data_files["eval"] = data_args.eval_file
dataset = load_dataset(data_args.data_path, data_files=data_files)
train_dataset = dataset["train"]
eval_dataset = dataset["eval"]
train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)
# Evaluation metrics
def compute_metrics(eval_preds, tokenizer):
metric = evaluate.load('exact_match')
preds, labels = eval_preds
# In case the model returns more than the prediction logits
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Replace -100s in the labels as we can't decode them
labels[labels == -100] = tokenizer.pad_token_id
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Some simple post-processing
decoded_preds = [pred.strip() for pred in decoded_preds]
decoded_labels = [label.strip() for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
return {'exact_match': result['exact_match']}
compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)
model = torch.compile(model)
# Training
trainer = Trainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
data_collator=data_collator,
compute_metrics=compute_metrics_fn,
)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
tokenizer.save_pretrained(save_directory=training_args.output_dir)
if __name__ == "__main__":
main()
```
When I didn't use `torch.compile(model)`, my code worked well. But when I added this line to my code, it produced the following error:
```
Traceback (most recent call last):
File "falcon_sft.py", line 230, in <module>
main()
File "falcon_sft.py", line 223, in main
trainer.train()
File "python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__
current_batch = next(dataloader_iter)
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
batch = self.__getitem__(keys)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 88 is out of bounds for size 0
```
So I'm confused about why this error occurs and how to fix it. Is it produced by `datasets` or by `torch.compile`?
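For what it's worth, a quick sanity check (a diagnostic sketch, not a fix) to confirm the mapped datasets are non-empty before they reach the `Trainer`:
```python
# verify the preprocessed splits still contain rows and the expected columns
print(len(train_dataset), train_dataset.column_names)
print(len(eval_dataset), eval_dataset.column_names)
assert len(train_dataset) > 0, "train split is empty after preprocessing"
```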
### Expected behavior
I want to use `torch.compile` in my code.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomasAndersonFang",
"id": 38727343,
"login": "TomasAndersonFang",
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomasAndersonFang",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6128/timeline | null | completed | null | null | 70.050278 | 1,519 |
https://api.github.com/repos/huggingface/datasets/issues/6126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6126/comments | https://api.github.com/repos/huggingface/datasets/issues/6126/events | https://github.com/huggingface/datasets/issues/6126 | 1,839,675,320 | I_kwDODunzps5tpze4 | 6,126 | Private datasets do not load when passing token | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.",
"I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything wor... | 2023-08-07T15:06:47Z | 2023-08-08T15:16:23Z | 2023-08-08T15:16:23Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`.
This is an unplanned, backward-incompatible breaking change.
Note that private datasets do load if instead `download_config` is passed:
```python
from datasets import DownloadConfig, load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>"))
ds
```
gives
```
Dataset({
features: ['text'],
num_rows: 4
})
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
```
gives
```
---------------------------------------------------------------------------
EmptyDatasetError Traceback (most recent call last)
[<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2107
2108 # Create a dataset builder
-> 2109 builder_instance = load_dataset_builder(
2110 path=path,
2111 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1793 download_config = download_config.copy() if download_config else DownloadConfig()
1794 download_config.storage_options.update(storage_options)
-> 1795 dataset_module = dataset_module_factory(
1796 path,
1797 revision=revision,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
1485 if isinstance(e1, EmptyDatasetError):
-> 1486 raise e1 from None
1487 if isinstance(e1, FileNotFoundError):
1488 raise FileNotFoundError(
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1474 download_config=download_config,
1475 download_mode=download_mode,
-> 1476 ).get_module()
1477 except (
1478 Exception
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
1030 sanitize_patterns(self.data_files)
1031 if self.data_files is not None
-> 1032 else get_data_patterns(base_path, download_config=self.download_config)
1033 )
1034 data_files = DataFilesDict.from_patterns(
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config)
457 return _get_data_files_patterns(resolver)
458 except FileNotFoundError:
--> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
460
461
EmptyDatasetError: The directory at hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files
```
### Expected behavior
The dataset should load.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6126/timeline | null | completed | null | null | 24.16 | 1,521 |
https://api.github.com/repos/huggingface/datasets/issues/6125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6125/comments | https://api.github.com/repos/huggingface/datasets/issues/6125/events | https://github.com/huggingface/datasets/issues/6125 | 1,837,980,986 | I_kwDODunzps5tjV06 | 6,125 | Reinforcement Learning and Robotics are not task categories in HF datasets metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4",
"events_url": "https://api.github.com/users/StoneT2000/events{/privacy}",
"followers_url": "https://api.github.com/users/StoneT2000/followers",
"following_url": "https://api.github.com/users/StoneT2000/following{/other_user}",
"gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StoneT2000",
"id": 35373228,
"login": "StoneT2000",
"node_id": "MDQ6VXNlcjM1MzczMjI4",
"organizations_url": "https://api.github.com/users/StoneT2000/orgs",
"received_events_url": "https://api.github.com/users/StoneT2000/received_events",
"repos_url": "https://api.github.com/users/StoneT2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StoneT2000",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-08-05T23:59:42Z | 2023-08-18T12:28:42Z | 2023-08-18T12:28:42Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
In https://huggingface.co/models there are task categories for RL and robotics, but there are none in https://huggingface.co/datasets.
Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those 2 tags.
Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves.
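For context, this is effectively what we are trying to do (a sketch using `huggingface_hub.metadata_update`; the repo id is a placeholder):
```python
from huggingface_hub import metadata_update

# attempt to tag a dataset repo with the RL/robotics task categories
# ("our-lab/our-dataset" is a placeholder repo id)
metadata_update(
    "our-lab/our-dataset",
    {"task_categories": ["reinforcement-learning", "robotics"]},
    repo_type="dataset",
)
```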
### Steps to reproduce the bug
1. Create a new dataset on Hugging Face
2. Try to type reinforcement-learning or robotics into the task categories; it does not allow you to commit
### Expected behavior
Expected to be able to add RL and robotics as task categories, since some previous datasets already have these tags.
### Environment info
N/A | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6125/timeline | null | completed | null | null | 300.483333 | 1,522 |
https://api.github.com/repos/huggingface/datasets/issues/6124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6124/comments | https://api.github.com/repos/huggingface/datasets/issues/6124/events | https://github.com/huggingface/datasets/issues/6124 | 1,837,868,112 | I_kwDODunzps5ti6RQ | 6,124 | Datasets crashing runs due to KeyError | {
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/conceptofmind",
"id": 25208228,
"login": "conceptofmind",
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"type": "User",
"url": "https://api.github.com/users/conceptofmind",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"i once had the same error and I could fix that by pushing a fake or a dummy commit on my hugging face dataset repo",
"Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?",
"> Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?\... | 2023-08-05T17:48:56Z | 2023-11-30T16:28:57Z | 2023-11-30T16:28:57Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi all,
I have been running into a pretty persistent issue recently when trying to load datasets.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
I receive a KeyError which crashes the runs.
```
Traceback (most recent call last):
main()
train_dataset = load_dataset(
^^^^^^^^^^^^^
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
raise e1 from None
).get_module()
^^^^^^^^^^^^
else get_data_patterns(base_path, download_config=self.download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return _get_data_files_patterns(resolver)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
data_files = pattern_resolver(pattern)
^^^^^^^^^^^^^^^^^^^^^^^^^
fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)]
^^^^^^^^^^^^^^
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):
listing = self.ls(path, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"last_modified": parse_datetime(tree_item["lastCommit"]["date"]),
~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'lastCommit'
```
Any help would be greatly appreciated.
Thank you,
Enrico
### Steps to reproduce the bug
Load the dataset from the Hugging Face Hub.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
### Expected behavior
Loads the dataset.
### Environment info
datasets-2.14.3
CUDA 11.8
Python 3.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6124/timeline | null | completed | null | null | 2,806.666944 | 1,523 |
https://api.github.com/repos/huggingface/datasets/issues/6123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6123/comments | https://api.github.com/repos/huggingface/datasets/issues/6123/events | https://github.com/huggingface/datasets/issues/6123 | 1,837,789,294 | I_kwDODunzps5tinBu | 6,123 | Inaccurate Bounding Boxes in "wildreceipt" Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4",
"events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}",
"followers_url": "https://api.github.com/users/HamzaGbada/followers",
"following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}",
"gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HamzaGbada",
"id": 50714796,
"login": "HamzaGbada",
"node_id": "MDQ6VXNlcjUwNzE0Nzk2",
"organizations_url": "https://api.github.com/users/HamzaGbada/orgs",
"received_events_url": "https://api.github.com/users/HamzaGbada/received_events",
"repos_url": "https://api.github.com/users/HamzaGbada/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HamzaGbada",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! Thanks for the investigation, but we are not the authors of these datasets, so please report this on the Hub instead so that the actual authors can fix it."
] | 2023-08-05T14:34:13Z | 2023-08-17T14:25:27Z | 2023-08-17T14:25:26Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset.
To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face:
**Example 1:**



**Example 2:**



It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar).
This inconsistency in bounding box accuracy warrants investigation and rectification for maintaining the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated.
### Steps to reproduce the bug
```python
import matplotlib.pyplot as plt
from datasets import load_dataset
# Define functions to convert bounding box formats
def convert_format1(box):
    x, y, w, h = box
    x2, y2 = x + w, y + h
    return [x, y, x2, y2]

def convert_format2(box):
    x1, y1, x2, y2 = box
    return [x1, y1, x2, y2]

def plot_cropped_image(image, box, title):
    cropped_image = image.crop(box)
    plt.imshow(cropped_image)
    plt.title(title)
    plt.axis('off')
    plt.savefig(title + '.png')
    plt.show()
doc_index = 1
word_index = 3
dataset = load_dataset("Theivaprakasham/wildreceipt")['train']
bbox_hugging_face = dataset[doc_index]['bboxes'][word_index]
text_unit_face = dataset[doc_index]['words'][word_index]
common_box_hugface_1 = convert_format1(bbox_hugging_face)
common_box_hugface_2 = convert_format2(bbox_hugging_face)
plot_cropped_image(image_hugging, common_box_hugface_1,
                   f'Hugging Face Bounding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}')
plot_cropped_image(image_hugging, common_box_hugface_2,
                   f'Hugging Face Bounding boxes (x1,y1,x2,y2 format) \n its associated text unit: {text_unit_face}')
```
### Expected behavior
The bounding boxes generated by the "wildreceipt" dataset in HuggingFace implementation loading commands should accurately match the actual labels and bounding boxes of the dataset.
### Environment info
- Python version: 3.8
- Hugging Face datasets version: 2.14.2
- Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6123/timeline | null | completed | null | null | 287.853611 | 1,524 |
https://api.github.com/repos/huggingface/datasets/issues/6122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6122/comments | https://api.github.com/repos/huggingface/datasets/issues/6122/events | https://github.com/huggingface/datasets/issues/6122 | 1,837,335,721 | I_kwDODunzps5tg4Sp | 6,122 | Upload README via `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liyucheng09",
"id": 27999909,
"login": "liyucheng09",
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liyucheng09",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"You can use `huggingface_hub`'s [Card API](https://huggingface.co/docs/huggingface_hub/package_reference/cards) to programmatically push a dataset card to the Hub."
] | 2023-08-04T21:00:27Z | 2023-08-21T18:18:54Z | 2023-08-21T18:18:54Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
`push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add a README file manually.
However, I did discover a snippet that initializes a README for every `push_to_hub`:
```
dataset_card = (
    DatasetCard(
        "---\n"
        + str(dataset_card_data)
        + "\n---\n"
        + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)'
    )
    if dataset_card is None
    else dataset_card
)
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
    path_or_fileobj=str(dataset_card).encode(),
    path_in_repo="README.md",
    repo_id=repo_id,
    token=token,
    repo_type="dataset",
    revision=branch,
)
```
So, if we could enable `push_to_hub` to upload a README file we wrote ourselves instead of the auto-generated one, it would save a ton of time and definitely alleviate the current "lack-of-dataset-card" situation.
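For reference, a minimal sketch of the `huggingface_hub` Card API route (the repo id and card content below are placeholders):
```python
from huggingface_hub import DatasetCard

# Hypothetical content and repo id, for illustration only.
card = DatasetCard("---\nlicense: mit\n---\n# My dataset\n\nA hand-written description.")
card.push_to_hub("username/my-dataset")  # DatasetCard targets repo_type="dataset" by default
```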
### Motivation
As elaborated above.
### Your contribution
I might be able to make a PR.
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6122/timeline | null | not_planned | null | null | 405.3075 | 1,525 |
https://api.github.com/repos/huggingface/datasets/issues/6116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6116/comments | https://api.github.com/repos/huggingface/datasets/issues/6116/events | https://github.com/huggingface/datasets/issues/6116 | 1,835,098,484 | I_kwDODunzps5tYWF0 | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | {
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Great idea, feel free to open a PR! :)"
] | 2023-08-03T13:45:10Z | 2023-08-16T10:02:53Z | 2023-08-16T10:02:53Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468, #5474). However, it has not been included in the guide since its implementation in PR #5480.
Mentioning it in the guide would help future users discover this added feature.
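For context, basic usage of the function looks roughly like this (the dataset name is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
ds = ds.select_columns(["text"])  # keeps only the listed columns, dropping the rest
```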
### Your contribution
I could submit a PR to add a brief description of the function to said guide. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6116/timeline | null | completed | null | null | 308.295278 | 1,531 |
https://api.github.com/repos/huggingface/datasets/issues/6114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6114/comments | https://api.github.com/repos/huggingface/datasets/issues/6114/events | https://github.com/huggingface/datasets/issues/6114 | 1,834,015,584 | I_kwDODunzps5tUNtg | 6,114 | Cache not being used when loading commonvoice 8.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clabornd",
"id": 31082141,
"login": "clabornd",
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"repos_url": "https://api.github.com/users/clabornd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clabornd",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can avoid this by using the `revision` parameter in `load_dataset` to always force downloading a specific commit (if not specified it defaults to HEAD, hence the redownload).",
"Thanks @mariosasko this works well, looks like I should have read the documentation a bit more carefully. \r\n\r\nIt is still a bi... | 2023-08-02T23:18:11Z | 2023-08-18T23:59:00Z | 2023-08-18T23:59:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I have Common Voice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the Arrow files, etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, with the same command that downloaded it initially:
```
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")
```
it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32`
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
2. dataset is updated by maintainers
3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
### Expected behavior
I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`.
Not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force datasets to point to the original version I downloaded?
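(For reference, pinning the originally downloaded version can be sketched with the `revision` argument — the commit hash below is a placeholder:)
```python
dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="<commit-sha-of-the-downloaded-version>",  # placeholder
    use_auth_token="<mytoken>",
)
```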
EDIT: It was indeed that the maintainers had updated the dataset (v8.0.0). However, I still can't load the dataset from disk instead of redownloading, with for example:
```
load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en")
> ...
> File ~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938, in cast_array_to_feature(array, feature, allow_number_to_str)
1937 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
...
1794 e = e.__context__
-> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets==2.7.0
python==3.10.8
OS: AWS Linux | {
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clabornd",
"id": 31082141,
"login": "clabornd",
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"repos_url": "https://api.github.com/users/clabornd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clabornd",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6114/timeline | null | completed | null | null | 384.680278 | 1,533 |
https://api.github.com/repos/huggingface/datasets/issues/6113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6113/comments | https://api.github.com/repos/huggingface/datasets/issues/6113/events | https://github.com/huggingface/datasets/issues/6113 | 1,833,854,030 | I_kwDODunzps5tTmRO | 6,113 | load_dataset() fails with streamlit caching inside docker | {
"avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4",
"events_url": "https://api.github.com/users/fierval/events{/privacy}",
"followers_url": "https://api.github.com/users/fierval/followers",
"following_url": "https://api.github.com/users/fierval/following{/other_user}",
"gists_url": "https://api.github.com/users/fierval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fierval",
"id": 987574,
"login": "fierval",
"node_id": "MDQ6VXNlcjk4NzU3NA==",
"organizations_url": "https://api.github.com/users/fierval/orgs",
"received_events_url": "https://api.github.com/users/fierval/received_events",
"repos_url": "https://api.github.com/users/fierval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fierval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fierval",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! This should be fixed in the latest (patch) release (run `pip install -U datasets` to install it). This behavior was due to a bug in our authentication logic."
] | 2023-08-02T20:20:26Z | 2023-08-21T18:18:27Z | 2023-08-21T18:18:27Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 62, in <module>
dashboard()
File "/home/user/app/app.py", line 47, in dashboard
feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper
return cached_func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/user/app/hf_interface.py", line 16, in load_data
hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
### Steps to reproduce the bug
```python
@st.cache_resource
def load_data(repo_id: str, hf_token=None):
    """Load data from the HuggingFace Hub."""
    hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
    hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"])
    return hf_dataset
```
### Expected behavior
I expect the dataset to load.
Note: works fine with datasets==2.13.1
### Environment info
datasets==2.14.2,
Ubuntu bionic-based Docker container. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6113/timeline | null | completed | null | null | 453.966944 | 1,534 |
https://api.github.com/repos/huggingface/datasets/issues/6112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6112/comments | https://api.github.com/repos/huggingface/datasets/issues/6112/events | https://github.com/huggingface/datasets/issues/6112 | 1,833,693,299 | I_kwDODunzps5tS_Bz | 6,112 | yaml error using push_to_hub with generated README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4",
"events_url": "https://api.github.com/users/kevintee/events{/privacy}",
"followers_url": "https://api.github.com/users/kevintee/followers",
"following_url": "https://api.github.com/users/kevintee/following{/other_user}",
"gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kevintee",
"id": 1643887,
"login": "kevintee",
"node_id": "MDQ6VXNlcjE2NDM4ODc=",
"organizations_url": "https://api.github.com/users/kevintee/orgs",
"received_events_url": "https://api.github.com/users/kevintee/received_events",
"repos_url": "https://api.github.com/users/kevintee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevintee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kevintee",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon."
] | 2023-08-02T18:21:21Z | 2023-12-12T15:00:44Z | 2023-12-12T15:00:44Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
    {
        "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
        "input_ids": Sequence(feature=Value(dtype="int64")),
        "attention_mask": Sequence(Value(dtype="int64")),
        "tokens": Sequence(Value(dtype="string")),
        "bbox": Array2D(dtype="int64", shape=(512, 4)),
    }
)
```
and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error:
```
Traceback (most recent call last):
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module>
build_dataset()
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset
push_to_hub(dataset, "multitask_document_classification_dataset")
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub
dataset.push_to_hub(f"looppayments/{dataset_name}", private=True)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e)
Bad request for commit endpoint:
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9)
7 | - 3
8 | - 224
9 | - 224
10 | dtype: float64
--------------^
11 | - name: input_ids
12 | sequence: int64
```
My guess is that the auto-generated YAML cannot be parsed for some reason.
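A quick check with plain PyYAML (a minimal sketch) supports this: Python tuples — such as the `ArrayXD` shapes above — are dumped with a Python-specific tag that generic YAML parsers reject, while plain lists are not.
```python
import yaml

print(yaml.dump({"shape": (3, 224, 224)}))
# shape: !!python/tuple
# - 3
# - 224
# - 224
print(yaml.dump({"shape": [3, 224, 224]}))
# shape:
# - 3
# - 224
# - 224
```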
### Steps to reproduce the bug
The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet:
```
from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value
from PIL import Image
from transformers import AutoProcessor
features = Features(
    {
        "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
        "input_ids": Sequence(feature=Value(dtype="int64")),
        "attention_mask": Sequence(Value(dtype="int64")),
        "tokens": Sequence(Value(dtype="string")),
        "bbox": Array2D(dtype="int64", shape=(512, 4)),
    }
)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)

def preprocess_dataset(rows):
    # Get images
    images = [
        Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"]
    ]
    encoding = processor(
        images,
        rows["tokens"],
        boxes=rows["bbox"],
        truncation=True,
        padding="max_length",
    )
    encoding["tokens"] = rows["tokens"]
    return encoding

# `dataset` is assumed to have been loaded earlier (this is a shortened snippet)
dataset = dataset.map(
    preprocess_dataset,
    batched=True,
    batch_size=5,
    features=features,
)
```
### Expected behavior
Using datasets==2.11.0, I'm able to successfully `push_to_hub` with no issues, but with datasets==2.14.2, I run into the above error.
### Environment info
- `datasets` version: 2.14.2
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6112/timeline | null | completed | null | null | 3,164.656389 | 1,535 |
https://api.github.com/repos/huggingface/datasets/issues/6111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6111/comments | https://api.github.com/repos/huggingface/datasets/issues/6111/events | https://github.com/huggingface/datasets/issues/6111 | 1,832,781,654 | I_kwDODunzps5tPgdW | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | {
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/2catycm",
"id": 41530341,
"login": "2catycm",
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"repos_url": "https://api.github.com/users/2catycm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/2catycm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"any idea?",
"This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n\r\n`load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`",
"> This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n> \r\n> `load_from_disk` is intended t... | 2023-08-02T09:17:29Z | 2023-08-29T02:00:28Z | 2023-08-29T02:00:28Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object.
However, even when one finally has the local files on disk, loading them into dataset objects is still buggy.
### Steps to reproduce the bug
Steps to reproduce the bug:
1. Found CIFAR dataset in hugging face: https://huggingface.co/datasets/cifar100/tree/main
2. Click ":" button to show "Clone repository" option, and then follow the prompts on the box:
```bash
cd my_directory_absolute
git lfs install
git clone https://huggingface.co/datasets/cifar100
ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK.
```
3. Write A python file to try to load the dataset
```python
from datasets import load_dataset, load_from_disk
dataset = load_from_disk("my_directory_absolute/cifar100")
```
Notice that, according to issue #3700, it is wrong to use `load_dataset("my_directory_absolute/cifar100")`, so we must use `load_from_disk` instead.
4. Then you will see the error reported:
```log
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[5], line 9
1 from datasets import load_dataset, load_from_disk
----> 9 dataset = load_from_disk("my_directory_absolute/cifar100")
File ~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232, in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)
2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
2231 else:
-> 2232 raise FileNotFoundError(
2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory."
2234 )
FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.
```
### Expected behavior
The dataset should load successfully.
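(As pointed out in the comments, `load_from_disk` only targets directories written by `save_to_disk`; for a git-cloned repository the intended call would instead be, as a sketch:)
```python
from datasets import load_dataset

dataset = load_dataset("my_directory_absolute/cifar100")
```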
### Environment info
```bash
datasets-cli env
```
-> results:
```txt
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.2
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/2catycm",
"id": 41530341,
"login": "2catycm",
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"repos_url": "https://api.github.com/users/2catycm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/2catycm",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6111/timeline | null | completed | null | null | 640.716389 | 1,536 |
https://api.github.com/repos/huggingface/datasets/issues/6110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6110/comments | https://api.github.com/repos/huggingface/datasets/issues/6110/events | https://github.com/huggingface/datasets/issues/6110 | 1,831,110,633 | I_kwDODunzps5tJIfp | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | {
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MattYoon",
"id": 57797966,
"login": "MattYoon",
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MattYoon",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached."
] | 2023-08-01T11:58:58Z | 2023-08-17T14:03:01Z | 2023-08-17T14:03:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`Dataset` initialized from in-memory data (a dictionary in my case; I haven't tested other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# the code below was run a second time so the map function can be loaded from the cache if it exists
from datasets import load_dataset, Dataset
dataset = load_dataset("tatsu-lab/alpaca")['train']
dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map
print(len(dataset.cache_files))
# 1
# copy the exact same data but initialize from a dictionary
memory_dataset = Dataset.from_dict({
    'instruction': dataset['instruction'],
    'input': dataset['input'],
    'output': dataset['output'],
    'text': dataset['text'],
})
memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map
print(len(memory_dataset.cache_files))
# Map: 100%|██████████| 52002/52002
# 0
```
### Expected behavior
The `map` function should create a cache regardless of how the `Dataset` was created.
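(Per the comment above, caching can be opted into explicitly for in-memory datasets — a minimal sketch, with a placeholder path:)
```python
memory_dataset = memory_dataset.map(
    lambda x: {'input': x['input'] + 'hi'},
    cache_file_name="path/to/cache.arrow",  # placeholder; an explicit path enables on-disk caching
)
```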
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6110/timeline | null | completed | null | null | 386.067222 | 1,537 |
https://api.github.com/repos/huggingface/datasets/issues/6109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6109/comments | https://api.github.com/repos/huggingface/datasets/issues/6109/events | https://github.com/huggingface/datasets/issues/6109 | 1,830,753,793 | I_kwDODunzps5tHxYB | 6,109 | Problems in downloading Amazon reviews from HF | {
"avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4",
"events_url": "https://api.github.com/users/610v4nn1/events{/privacy}",
"followers_url": "https://api.github.com/users/610v4nn1/followers",
"following_url": "https://api.github.com/users/610v4nn1/following{/other_user}",
"gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/610v4nn1",
"id": 52964960,
"login": "610v4nn1",
"node_id": "MDQ6VXNlcjUyOTY0OTYw",
"organizations_url": "https://api.github.com/users/610v4nn1/orgs",
"received_events_url": "https://api.github.com/users/610v4nn1/received_events",
"repos_url": "https://api.github.com/users/610v4nn1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/610v4nn1",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\... | 2023-08-01T08:38:29Z | 2024-06-25T13:48:38Z | 2023-08-02T07:12:07Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 928kB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.81MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s]
Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s]
```
the file is clearly too small to contain the requested dataset; in fact, it contains an error message:
```
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error>
```
obviously the script fails:
```
> raise DatasetGenerationError("An error occurred while generating the dataset") from e
E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE")
### Expected behavior
I would expect the dataset to be downloaded and processed
### Environment info
* The problem is present with both datasets 2.12.0 and 2.14.2
* python version 3.10.12 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6109/timeline | null | not_planned | null | null | 22.560556 | 1,538 |
https://api.github.com/repos/huggingface/datasets/issues/6106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6106/comments | https://api.github.com/repos/huggingface/datasets/issues/6106/events | https://github.com/huggingface/datasets/issues/6106 | 1,829,131,223 | I_kwDODunzps5tBlPX | 6,106 | load local json_file as dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ",
"Thanks for your help!"
] | 2023-07-31T12:53:49Z | 2023-08-18T01:46:35Z | 2023-08-18T01:46:35Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I tried to load a local JSON file as a dataset, but parsing failed because some columns are of 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`
2. Then the error is triggered, like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double`
### Expected behavior
Columns of 'float' type should be allowed; at a minimum, those columns should be converted to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# if col type is not str, we need to convert it to str
# (`dataset` is assumed to be the list of parsed JSON rows and `keys` its column names)
mapping = {}
for col in keys:
    if isinstance(dataset[0][col], str):
        mapping[col] = [row.get(col) for row in dataset]
    else:
        mapping[col] = [str(row.get(col)) for row in dataset]
```
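(Following the suggestion in the comments, a sketch combining the casting above with `Dataset.from_generator` — here `rows` stands for the records parsed with the `json` module, an assumption for illustration:)
```python
from datasets import Dataset

def gen():
    for row in rows:  # `rows`: list of dicts parsed from the JSON file (assumed)
        yield {col: val if isinstance(val, str) else str(val) for col, val in row.items()}

ds = Dataset.from_generator(gen)
```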
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6106/timeline | null | completed | null | null | 420.879444 | 1,541 |
https://api.github.com/repos/huggingface/datasets/issues/6100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6100/comments | https://api.github.com/repos/huggingface/datasets/issues/6100/events | https://github.com/huggingface/datasets/issues/6100 | 1,828,118,930 | I_kwDODunzps5s9uGS | 6,100 | TypeError when loading from GCP bucket | {
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ",
"We have fixed it. We are planning to do a patch release today."
] | 2023-07-30T23:03:00Z | 2023-08-03T10:00:48Z | 2023-08-01T10:38:55Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_files=["gs://..."])
```
The following exception is raised:
```python
Traceback (most recent call last):
...
packages/datasets/data_files.py", line 335, in resolve_pattern
protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
TypeError: can only concatenate tuple (not "str") to tuple
```
With gcsfs's `GCSFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string.
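A minimal sketch of a possible fix (assuming the fsspec convention that `protocol` may be either a string or a tuple of aliases):
```python
# fs.protocol is the str "file" for local filesystems but the tuple
# ('gs', 'gcs') for GCS; normalizing it first makes the concatenation safe.
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
```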
### Expected behavior
The file should be loaded without exception.
### Environment info
- `datasets` version: 2.14.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6100/timeline | null | completed | null | null | 35.598611 | 1,547 |
https://api.github.com/repos/huggingface/datasets/issues/6099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6099/comments | https://api.github.com/repos/huggingface/datasets/issues/6099/events | https://github.com/huggingface/datasets/issues/6099 | 1,827,893,576 | I_kwDODunzps5s83FI | 6,099 | How do I get "amazon_us_reviews" | {
"avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4",
"events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}",
"followers_url": "https://api.github.com/users/IqraBaluch/followers",
"following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}",
"gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IqraBaluch",
"id": 57810189,
"login": "IqraBaluch",
"node_id": "MDQ6VXNlcjU3ODEwMTg5",
"organizations_url": "https://api.github.com/users/IqraBaluch/orgs",
"received_events_url": "https://api.github.com/users/IqraBaluch/received_events",
"repos_url": "https://api.github.com/users/IqraBaluch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IqraBaluch",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\... | 2023-07-30T11:02:17Z | 2023-08-21T05:08:08Z | 2023-08-10T05:02:35Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
I have been trying to load 'amazon_us_reviews' but have been unable to do so.
```python
amazon_us_reviews = load_dataset('amazon_us_reviews')
print(amazon_us_reviews)
```
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']
Example of usage:
`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]
__________________________________________________________________________
```python
amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)
```
**ERROR**
Generating train split: 0% | 0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1692 )
-> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record
1694 writer.write(example, key)
11 frames
KeyError: 'marketplace'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1711 e = e.__context__
-> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1713
1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
### Motivation
The dataset I'm using
https://huggingface.co/datasets/amazon_us_reviews
### Your contribution
What is the best way to load this data? | {
"avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4",
"events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}",
"followers_url": "https://api.github.com/users/IqraBaluch/followers",
"following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}",
"gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IqraBaluch",
"id": 57810189,
"login": "IqraBaluch",
"node_id": "MDQ6VXNlcjU3ODEwMTg5",
"organizations_url": "https://api.github.com/users/IqraBaluch/orgs",
"received_events_url": "https://api.github.com/users/IqraBaluch/received_events",
"repos_url": "https://api.github.com/users/IqraBaluch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IqraBaluch",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6099/timeline | null | completed | null | null | 258.005 | 1,548 |
https://api.github.com/repos/huggingface/datasets/issues/6097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6097/comments | https://api.github.com/repos/huggingface/datasets/issues/6097/events | https://github.com/huggingface/datasets/issues/6097 | 1,827,054,143 | I_kwDODunzps5s5qI_ | 6,097 | Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4",
"events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}",
"followers_url": "https://api.github.com/users/aschoenauer-sebag/followers",
"following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}",
"gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aschoenauer-sebag",
"id": 2538048,
"login": "aschoenauer-sebag",
"node_id": "MDQ6VXNlcjI1MzgwNDg=",
"organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs",
"received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events",
"repos_url": "https://api.github.com/users/aschoenauer-sebag/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aschoenauer-sebag",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it."
] | 2023-07-28T20:31:59Z | 2023-07-28T20:49:58Z | 2023-07-28T20:49:58Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi team!
I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class fails to retrieve anything other than the embeddings themselves - not super useful. This is not the case when the `set_format` method is not used: then you can also retrieve any other feature value, such as an index/id/etc.
Are you able to reproduce what I observe?
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This will return only the following for the most similar vectors to `new_vector` - in particular, it will not return the `ids` feature:
```
{'vectors': array([[random values ...]])}
```
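As the author later notes in the comments, the fix is to keep the format but request all columns, for example:
```python
# Keep the numpy formatting for "vectors" while still returning the
# other columns (such as "ids") from get_nearest_examples.
foo.set_format('numpy', ['vectors'], output_all_columns=True)
```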
### Expected behavior
The expected behavior happens when the `set_format` method is not called:
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
# foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This *will* return the `ids` of the similar vectors - although, unfortunately, as a list of lists instead of an array (for caching reasons, I believe - I read this elsewhere):
```
{'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']}
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4",
"events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}",
"followers_url": "https://api.github.com/users/aschoenauer-sebag/followers",
"following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}",
"gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aschoenauer-sebag",
"id": 2538048,
"login": "aschoenauer-sebag",
"node_id": "MDQ6VXNlcjI1MzgwNDg=",
"organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs",
"received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events",
"repos_url": "https://api.github.com/users/aschoenauer-sebag/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aschoenauer-sebag",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6097/timeline | null | completed | null | null | 0.299722 | 1,550 |
https://api.github.com/repos/huggingface/datasets/issues/6090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6090/comments | https://api.github.com/repos/huggingface/datasets/issues/6090/events | https://github.com/huggingface/datasets/issues/6090 | 1,825,865,043 | I_kwDODunzps5s1H1T | 6,090 | FilesIterable skips all the files after a hidden file | {
"avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4",
"events_url": "https://api.github.com/users/dkrivosic/events{/privacy}",
"followers_url": "https://api.github.com/users/dkrivosic/followers",
"following_url": "https://api.github.com/users/dkrivosic/following{/other_user}",
"gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkrivosic",
"id": 10785413,
"login": "dkrivosic",
"node_id": "MDQ6VXNlcjEwNzg1NDEz",
"organizations_url": "https://api.github.com/users/dkrivosic/orgs",
"received_events_url": "https://api.github.com/users/dkrivosic/received_events",
"repos_url": "https://api.github.com/users/dkrivosic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkrivosic",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting. We've merged a PR with a fix."
] | 2023-07-28T07:25:57Z | 2023-07-28T10:51:14Z | 2023-07-28T10:50:11Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`.
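A simplified sketch of the intended control flow (not the actual `download_manager` source; the helper below is purely illustrative):
```python
import os

def iter_visible_files(paths):
    for path in paths:
        if os.path.basename(path).startswith((".", "__")):
            continue  # `return` here would silently drop every remaining file
        yield path
```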
### Steps to reproduce the bug
https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8-
### Expected behavior
The script should print all the files except the hidden one.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6090/timeline | null | completed | null | null | 3.403889 | 1,557 |
https://api.github.com/repos/huggingface/datasets/issues/6088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6088/comments | https://api.github.com/repos/huggingface/datasets/issues/6088/events | https://github.com/huggingface/datasets/issues/6088 | 1,825,665,235 | I_kwDODunzps5s0XDT | 6,088 | Loading local data files initiates web requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4",
"events_url": "https://api.github.com/users/lytning98/events{/privacy}",
"followers_url": "https://api.github.com/users/lytning98/followers",
"following_url": "https://api.github.com/users/lytning98/following{/other_user}",
"gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lytning98",
"id": 23375707,
"login": "lytning98",
"node_id": "MDQ6VXNlcjIzMzc1NzA3",
"organizations_url": "https://api.github.com/users/lytning98/orgs",
"received_events_url": "https://api.github.com/users/lytning98/received_events",
"repos_url": "https://api.github.com/users/lytning98/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytning98/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lytning98",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-07-28T04:06:26Z | 2023-07-28T05:02:22Z | 2023-07-28T05:02:22Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```
But this failed on a web request because I'm executing the script on a machine without Internet access. The stack trace shows:
```
in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode)
940 self.download_config = download_config
941 self.download_mode = download_mode
--> 942 increase_load_count(name, resource_type="dataset")
```
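For reference, a sketch of the offline-mode workaround mentioned below (`HF_DATASETS_OFFLINE` is the documented environment variable; it must be set before `datasets` is imported):
```python
import os

# Disable all network calls (including the load-count request above)
# so purely local files can be loaded without Internet access.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/local/my_dataset.json")
```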
I've read in the source code that this can be fixed by setting an environment variable to run in offline mode, as sketched above. I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of requesting `increase_load_count` on some server when loading just LOCAL data files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4",
"events_url": "https://api.github.com/users/lytning98/events{/privacy}",
"followers_url": "https://api.github.com/users/lytning98/followers",
"following_url": "https://api.github.com/users/lytning98/following{/other_user}",
"gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lytning98",
"id": 23375707,
"login": "lytning98",
"node_id": "MDQ6VXNlcjIzMzc1NzA3",
"organizations_url": "https://api.github.com/users/lytning98/orgs",
"received_events_url": "https://api.github.com/users/lytning98/received_events",
"repos_url": "https://api.github.com/users/lytning98/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytning98/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lytning98",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6088/timeline | null | completed | null | null | 0.932222 | 1,559 |
https://api.github.com/repos/huggingface/datasets/issues/6087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6087/comments | https://api.github.com/repos/huggingface/datasets/issues/6087/events | https://github.com/huggingface/datasets/issues/6087 | 1,825,133,741 | I_kwDODunzps5syVSt | 6,087 | fsspec dependency is set too low | {
"avatar_url": "https://avatars.githubusercontent.com/u/1085885?v=4",
"events_url": "https://api.github.com/users/iXce/events{/privacy}",
"followers_url": "https://api.github.com/users/iXce/followers",
"following_url": "https://api.github.com/users/iXce/following{/other_user}",
"gists_url": "https://api.github.com/users/iXce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iXce",
"id": 1085885,
"login": "iXce",
"node_id": "MDQ6VXNlcjEwODU4ODU=",
"organizations_url": "https://api.github.com/users/iXce/orgs",
"received_events_url": "https://api.github.com/users/iXce/received_events",
"repos_url": "https://api.github.com/users/iXce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iXce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iXce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iXce",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! A PR with a fix has just been merged."
] | 2023-07-27T20:08:22Z | 2023-07-28T10:07:56Z | 2023-07-28T10:07:03Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`fsspec.callbacks.TqdmCallback` (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0) (commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180); however, the minimum version pinned in setup.py is 2021.11.1 (https://github.com/huggingface/datasets/blob/main/setup.py#L129).
### Steps to reproduce the bug
1. Install `fsspec==2021.11.1`
2. Install the latest `datasets==2.14.1`
3. Import `datasets`; the import fails due to the lack of `fsspec.callbacks.TqdmCallback` (see the sketch below)
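A minimal reproduction sketch of the failing import (per the issue, `fsspec.callbacks.TqdmCallback` only exists from fsspec 2022.3.0 onwards):
```python
# With fsspec==2021.11.1 installed, this raises an ImportError,
# and `import datasets` fails for the same reason.
from fsspec.callbacks import TqdmCallback
```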
### Expected behavior
No import issue
### Environment info
N/A | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6087/timeline | null | completed | null | null | 13.978056 | 1,560 |
https://api.github.com/repos/huggingface/datasets/issues/6086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6086/comments | https://api.github.com/repos/huggingface/datasets/issues/6086/events | https://github.com/huggingface/datasets/issues/6086 | 1,825,009,268 | I_kwDODunzps5sx250 | 6,086 | Support `fsspec` in `Dataset.to_<format>` methods | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}"... | null | [
"Hi @mariosasko unless someone's already working on it, I guess I can tackle it!",
"Hi! Sure, feel free to tackle this.",
"#self-assign",
"I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.Data... | 2023-07-27T19:08:37Z | 2024-03-07T07:22:43Z | 2024-03-07T07:22:42Z | COLLABORATOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
Supporting this should be fairly easy; a sketch of the desired usage follows.
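A hypothetical sketch of the desired usage (the exact signature is an assumption, mirroring the `storage_options` parameter of `load_dataset`):
```python
# Write a dataset straight to a remote filesystem via an fsspec URI.
ds.to_parquet(
    "s3://my-bucket/datasets/demo.parquet",
    storage_options={"key": "...", "secret": "..."},
)
```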
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353). | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6086/timeline | null | completed | null | null | 5,364.234722 | 1,561 |
https://api.github.com/repos/huggingface/datasets/issues/6079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6079/comments | https://api.github.com/repos/huggingface/datasets/issues/6079/events | https://github.com/huggingface/datasets/issues/6079 | 1,822,597,471 | I_kwDODunzps5soqFf | 6,079 | Iterating over DataLoader based on HF datasets is stuck forever | {
"avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4",
"events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}",
"followers_url": "https://api.github.com/users/arindamsarkar93/followers",
"following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}",
"gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arindamsarkar93",
"id": 5454868,
"login": "arindamsarkar93",
"node_id": "MDQ6VXNlcjU0NTQ4Njg=",
"organizations_url": "https://api.github.com/users/arindamsarkar93/orgs",
"received_events_url": "https://api.github.com/users/arindamsarkar93/received_events",
"repos_url": "https://api.github.com/users/arindamsarkar93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arindamsarkar93",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ",
"Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.t... | 2023-07-26T14:52:37Z | 2024-02-07T17:46:52Z | 2023-07-30T14:09:06Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am using Amazon Sagemaker notebook (Amazon Linux 2) with python 3.10 based Conda environment.
I have a dataset in parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6 based conda environment. What should be my next steps here?
### Steps to reproduce the bug
```python
import time

from datasets import load_dataset
from torch.utils.data import DataLoader

# `tr_data_path` points to a local directory of parquet shards and
# `streaming_data_collate_fn` is a custom collate function (both
# defined elsewhere in the original script).
train_dataset = load_dataset(
    "parquet",
    data_files={"train": tr_data_path + "*.parquet"},
    split="train",
    streaming=True,
).with_format("torch")

# Note: `collate_fn` is a DataLoader argument, not a `load_dataset` one.
train_dataloader = DataLoader(
    train_dataset,
    batch_size=2,
    num_workers=0,
    collate_fn=streaming_data_collate_fn,
)

t = time.time()
iter_ = 0
for batch in train_dataloader:
    iter_ += 1
    if iter_ == 1000:
        break
print(time.time() - t)
```
### Expected behavior
The snippet should work normally and load the next batch of data.
### Environment info
datasets: '2.14.0'
pyarrow: '12.0.0'
torch: '2.0.0'
Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
!uname -r
5.10.178-162.673.amzn2.x86_64 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4",
"events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}",
"followers_url": "https://api.github.com/users/arindamsarkar93/followers",
"following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}",
"gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arindamsarkar93",
"id": 5454868,
"login": "arindamsarkar93",
"node_id": "MDQ6VXNlcjU0NTQ4Njg=",
"organizations_url": "https://api.github.com/users/arindamsarkar93/orgs",
"received_events_url": "https://api.github.com/users/arindamsarkar93/received_events",
"repos_url": "https://api.github.com/users/arindamsarkar93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arindamsarkar93",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6079/timeline | null | completed | null | null | 95.274722 | 1,568 |
https://api.github.com/repos/huggingface/datasets/issues/6078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6078/comments | https://api.github.com/repos/huggingface/datasets/issues/6078/events | https://github.com/huggingface/datasets/issues/6078 | 1,822,501,472 | I_kwDODunzps5soSpg | 6,078 | resume_download with streaming=True | {
"avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4",
"events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}",
"followers_url": "https://api.github.com/users/NicolasMICAUX/followers",
"following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}",
"gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NicolasMICAUX",
"id": 72763959,
"login": "NicolasMICAUX",
"node_id": "MDQ6VXNlcjcyNzYzOTU5",
"organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs",
"received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events",
"repos_url": "https://api.github.com/users/NicolasMICAUX/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NicolasMICAUX",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ",
"Ok thank you for your answer",
"I'm closing this as a duplicate of #5380"
] | 2023-07-26T14:08:22Z | 2023-07-28T11:05:03Z | 2023-07-28T11:05:03Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process, and I saved the step at which my training stopped.
But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` does not seem to work with `streaming=True`.
### Steps to reproduce the bug
```
from datasets import load_dataset, DownloadConfig
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True, # optional
split="train",
download_config=DownloadConfig(resume_download=True)
)
# interrupt the run and try to relaunch it => this restarts from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
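A partial workaround that exists today is `IterableDataset.skip`, although it still streams through (and re-downloads) the skipped examples, so it saves no bandwidth:
```python
# Skip the first million examples; note the data is still downloaded.
resumed_dataset = dataset.skip(1_000_000)
```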
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6078/timeline | null | completed | null | null | 44.944722 | 1,569 |
https://api.github.com/repos/huggingface/datasets/issues/6075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6075/comments | https://api.github.com/repos/huggingface/datasets/issues/6075/events | https://github.com/huggingface/datasets/issues/6075 | 1,822,341,398 | I_kwDODunzps5snrkW | 6,075 | Error loading music files using `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/susnato",
"id": 56069179,
"login": "susnato",
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"repos_url": "https://api.github.com/users/susnato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/susnato",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.",
"I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!"
] | 2023-07-26T12:44:05Z | 2023-07-26T13:08:08Z | 2023-07-26T13:08:08Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
formatted_output = format_table(
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
return self.format_column(pa_table)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
with SoundFile(file, 'r', samplerate, channels,
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```
### Steps to reproduce the bug
Code to reproduce the error -
```python
from datasets import load_dataset
ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```
### Expected behavior
I should be able to read the music file without any error.
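Per the resolution in the comments, MP3 decoding requires `soundfile>=0.12.1`; a quick version check (sketch):
```python
import soundfile

# Versions below 0.12.1 raise "Format not recognised" for MP3 files;
# upgrade with: pip install -U "soundfile>=0.12.1"
print(soundfile.__version__)
```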
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/susnato",
"id": 56069179,
"login": "susnato",
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"repos_url": "https://api.github.com/users/susnato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/susnato",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6075/timeline | null | completed | null | null | 0.400833 | 1,572 |
https://api.github.com/repos/huggingface/datasets/issues/6073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6073/comments | https://api.github.com/repos/huggingface/datasets/issues/6073/events | https://github.com/huggingface/datasets/issues/6073 | 1,822,167,804 | I_kwDODunzps5snBL8 | 6,073 | version 2.3.2: load_dataset() data_files can't include .xxxx in path | {
"avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4",
"events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}",
"followers_url": "https://api.github.com/users/BUAAChuanWang/followers",
"following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}",
"gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BUAAChuanWang",
"id": 45893496,
"login": "BUAAChuanWang",
"node_id": "MDQ6VXNlcjQ1ODkzNDk2",
"organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs",
"received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events",
"repos_url": "https://api.github.com/users/BUAAChuanWang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BUAAChuanWang",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)."
] | 2023-07-26T11:09:31Z | 2023-08-29T15:53:59Z | 2023-08-29T15:53:59Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
First, I cd into the working directory.
Then, I just use `load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`,
but that doesn't work and raises:
```
FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/
```
I debugged it, and the same call works fine in version 2.1.2, so there may be a bug in the path joining.
Here is the whole bug report:
```
/x/datasets/loa │
│ d.py:1656 in load_dataset │
│ │
│ 1653 │ ignore_verifications = ignore_verifications or save_infos │
│ 1654 │ │
│ 1655 │ # Create a dataset builder │
│ ❱ 1656 │ builder_instance = load_dataset_builder( │
│ 1657 │ │ path=path, │
│ 1658 │ │ name=name, │
│ 1659 │ │ data_dir=data_dir, │
│ │
│ x/datasets/loa │
│ d.py:1439 in load_dataset_builder │
│ │
│ 1436 │ if use_auth_token is not None: │
│ 1437 │ │ download_config = download_config.copy() if download_config e │
│ 1438 │ │ download_config.use_auth_token = use_auth_token │
│ ❱ 1439 │ dataset_module = dataset_module_factory( │
│ 1440 │ │ path, │
│ 1441 │ │ revision=revision, │
│ 1442 │ │ download_config=download_config, │
│ │
│ x/datasets/loa │
│ d.py:1097 in dataset_module_factory │
│ │
│ 1094 │ │
│ 1095 │ # Try packaged │
│ 1096 │ if path in _PACKAGED_DATASETS_MODULES: │
│ ❱ 1097 │ │ return PackagedDatasetModuleFactory( │
│ 1098 │ │ │ path, │
│ 1099 │ │ │ data_dir=data_dir, │
│ 1100 │ │ │ data_files=data_files, │
│ │
│x/datasets/loa │
│ d.py:743 in get_module │
│ │
│ 740 │ │ │ if self.data_dir is not None │
│ 741 │ │ │ else get_patterns_locally(str(Path().resolve())) │
│ 742 │ │ ) │
│ ❱ 743 │ │ data_files = DataFilesDict.from_local_or_remote( │
│ 744 │ │ │ patterns, │
│ 745 │ │ │ use_auth_token=self.download_config.use_auth_token, │
│ 746 │ │ │ base_path=str(Path(self.data_dir).resolve()) if self.data │
│ │
│ x/datasets/dat │
│ a_files.py:590 in from_local_or_remote │
│ │
│ 587 │ │ out = cls() │
│ 588 │ │ for key, patterns_for_key in patterns.items(): │
│ 589 │ │ │ out[key] = ( │
│ ❱ 590 │ │ │ │ DataFilesList.from_local_or_remote( │
│ 591 │ │ │ │ │ patterns_for_key, │
│ 592 │ │ │ │ │ base_path=base_path, │
│ 593 │ │ │ │ │ allowed_extensions=allowed_extensions, │
│ │
│ /x/datasets/dat │
│ a_files.py:558 in from_local_or_remote │
│ │
│ 555 │ │ use_auth_token: Optional[Union[bool, str]] = None, │
│ 556 │ ) -> "DataFilesList": │
│ 557 │ │ base_path = base_path if base_path is not None else str(Path() │
│ ❱ 558 │ │ data_files = resolve_patterns_locally_or_by_urls(base_path, pa │
│ 559 │ │ origin_metadata = _get_origin_metadata_locally_or_by_urls(data │
│ 560 │ │ return cls(data_files, origin_metadata) │
│ 561 │
│ │
│ /x/datasets/dat │
│ a_files.py:195 in resolve_patterns_locally_or_by_urls │
│ │
│ 192 │ │ if is_remote_url(pattern): │
│ 193 │ │ │ data_files.append(Url(pattern)) │
│ 194 │ │ else: │
│ ❱ 195 │ │ │ for path in _resolve_single_pattern_locally(base_path, pat │
│ 196 │ │ │ │ data_files.append(path) │
│ 197 │ │
│ 198 │ if not data_files: │
│ │
│ /x/datasets/dat │
│ a_files.py:145 in _resolve_single_pattern_locally │
│ │
│ 142 │ │ error_msg = f"Unable to find '{pattern}' at {Path(base_path).r │
│ 143 │ │ if allowed_extensions is not None: │
│ 144 │ │ │ error_msg += f" with any supported extension {list(allowed │
│ ❱ 145 │ │ raise FileNotFoundError(error_msg) │
│ 146 │ return sorted(out) │
│ 147
### Steps to reproduce the bug
1. Version = 2.3.2
2. In a shell, cd into the working directory (`cd /a/b/c/.d/`)
3. Call `load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`
### Expected behavior
The data files should be found and loaded, as in version 2.1.2. Please fix this.
### Environment info
2.3.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6073/timeline | null | completed | null | null | 820.741111 | 1,574 |
https://api.github.com/repos/huggingface/datasets/issues/6071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6071/comments | https://api.github.com/repos/huggingface/datasets/issues/6071/events | https://github.com/huggingface/datasets/issues/6071 | 1,821,990,749 | I_kwDODunzps5smV9d | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a... | 2023-07-26T09:37:20Z | 2023-07-27T12:42:58Z | 2023-07-27T12:42:58Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
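In other words, a possible direction for a fix (my assumption, not a tested patch) would be to overlay the explicitly-passed options on top of whatever is resolved from the `DownloadConfig`; the return shape of `_prepare_path_and_storage_options` is assumed here:

```python
from fsspec.core import get_fs_token_paths

# sketch: options passed explicitly to load_dataset() should win over resolved defaults
pattern, resolved_options = _prepare_path_and_storage_options(pattern, download_config=download_config)
storage_options = {**resolved_options, **(user_storage_options or {})}
fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
```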
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6071/timeline | null | completed | null | null | 27.093889 | 1,576 |
https://api.github.com/repos/huggingface/datasets/issues/6069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6069/comments | https://api.github.com/repos/huggingface/datasets/issues/6069/events | https://github.com/huggingface/datasets/issues/6069 | 1,820,831,535 | I_kwDODunzps5sh68v | 6,069 | KeyError: dataset has no key "image" | {
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n",
"This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_a... | 2023-07-25T17:45:50Z | 2024-09-06T08:16:16Z | 2023-07-27T12:42:17Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir=path-to-data)`
and defined a transform to process the data, following the Datasets docs.
However, I get a KeyError indicating there is no "image" key in my dataset. When I print the example_batch sent to the transformation function, it shows that only the labels are being passed in.
For some reason, the images are not in the example batches.
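For context, a minimal sketch of the pattern I am following (the data path and the transform body are placeholders):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/data")  # placeholder path
print(ds["train"].column_names)  # sanity check: should include "image" and "label"

def transforms(example_batch):
    # example_batch["image"] should hold PIL images at this point
    example_batch["pixel_values"] = [img.convert("RGB") for img in example_batch["image"]]
    return example_batch

ds = ds.with_transform(transforms)
```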
### Steps to reproduce the bug
I'm using the latest stable version of datasets
### Expected behavior
I expect the example_batches to contain both images and labels
### Environment info
I'm using the latest stable version of datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6069/timeline | null | completed | null | null | 42.940833 | 1,578 |
https://api.github.com/repos/huggingface/datasets/issues/6066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6066/comments | https://api.github.com/repos/huggingface/datasets/issues/6066/events | https://github.com/huggingface/datasets/issues/6066 | 1,819,717,542 | I_kwDODunzps5sdq-m | 6,066 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime",
"I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a mult... | 2023-07-25T07:24:36Z | 2023-07-26T10:56:25Z | 2023-07-26T10:56:24Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
It happens occasionally.
### Expected behavior
I added a print in tqdm's `ensure_lock()` and got this output: `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>`.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
"""get (create if necessary) and then restore `tqdm_class`'s lock"""
print("ensure_lock", tqdm_class, lock_name)
old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock
lock = old_lock or tqdm_class.get_lock() # maybe create a new lock
lock = getattr(lock, lock_name, lock) # maybe subtype
tqdm_class.set_lock(lock)
yield lock
if old_lock is None:
del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class.
else:
tqdm_class.set_lock(old_lock)
```
However, the Hugging Face `datasets` wrapper `datasets.utils.logging._tqdm_cls` does not keep a usable `_lock` field: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
def __call__(self, *args, disable=False, **kwargs):
if _tqdm_active and not disable:
return tqdm_lib.tqdm(*args, **kwargs)
else:
return EmptyTqdm(*args, **kwargs)
def set_lock(self, *args, **kwargs):
self._lock = None
if _tqdm_active:
return tqdm_lib.tqdm.set_lock(*args, **kwargs)
def get_lock(self):
if _tqdm_active:
return tqdm_lib.tqdm.get_lock()
```
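If it helps, here is a sketch of the direction I would try (purely my assumption, not a tested patch): have `set_lock` remember the lock it is given, and seed the singleton once, so that `ensure_lock` always restores a lock instead of racing on `del tqdm_class._lock`:

```python
# sketch within src/datasets/utils/logging.py, where `tqdm_lib` and `_tqdm_active` already exist
class _tqdm_cls:
    def set_lock(self, lock=None, *args, **kwargs):
        self._lock = lock  # keep the real lock instead of overwriting it with None
        if _tqdm_active:
            return tqdm_lib.tqdm.set_lock(lock, *args, **kwargs)

    def get_lock(self):
        if _tqdm_active:
            return tqdm_lib.tqdm.get_lock()
        return getattr(self, "_lock", None)

tqdm = _tqdm_cls()
tqdm.set_lock(tqdm_lib.tqdm.get_lock())  # seed once so ensure_lock always restores instead of deleting
```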
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6066/timeline | null | completed | null | null | 27.53 | 1,581 |
https://api.github.com/repos/huggingface/datasets/issues/6060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6060/comments | https://api.github.com/repos/huggingface/datasets/issues/6060/events | https://github.com/huggingface/datasets/issues/6060 | 1,816,614,120 | I_kwDODunzps5sR1To | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or... | 2023-07-22T05:06:43Z | 2024-01-22T18:35:12Z | 2024-01-22T18:35:12Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work. When I am training a model, the mapping runs twice; when I run a test that only builds the dataset and dataloader (just printing the batches), it works. The dataset-loading code is identical in both cases.
On another server with 30 CPU cores, using 2 GPUs, it doesn't work either.
I have tried checking both `rank` and `local_rank`, but neither explained the behavior.
### Steps to reproduce the bug
use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run
This is my code:
```python
if args.distributed and world_size > 1:
if args.local_rank > 0:
print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
torch.distributed.barrier()
print("Mapping dataset")
dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys")
dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift")
dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys")
if args.local_rank == 0:
print("Mapping finished, loading results from main process")
torch.distributed.barrier()
```
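For reference, a variant I would try (assumptions: `cut_reorder_keys` is a module-level function and the cache directory is shared and writable) that pins the cache files so every rank resolves the same entries:

```python
dataset = dataset.map(
    cut_reorder_keys,  # a named module-level function fingerprints more reliably than a lambda
    fn_kwargs=dict(num_stations_list=args.num_stations_list, is_pad=True, is_train=True),
    num_proc=8,
    cache_file_name="./cache/cut_reorder_keys_train.arrow",  # hypothetical shared cache path
    desc="cut_reorder_keys",
)
```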
### Expected behavior
Only the main process should execute `map`, while the subprocesses load the cache from disk.
### Environment info
server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`
server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04` | {
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6060/timeline | null | completed | null | null | 4,429.474722 | 1,587 |
https://api.github.com/repos/huggingface/datasets/issues/6058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6058/comments | https://api.github.com/repos/huggingface/datasets/issues/6058/events | https://github.com/huggingface/datasets/issues/6058 | 1,815,131,397 | I_kwDODunzps5sMLUF | 6,058 | laion-coco download error | {
"avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4",
"events_url": "https://api.github.com/users/yangyijune/events{/privacy}",
"followers_url": "https://api.github.com/users/yangyijune/followers",
"following_url": "https://api.github.com/users/yangyijune/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangyijune",
"id": 54424110,
"login": "yangyijune",
"node_id": "MDQ6VXNlcjU0NDI0MTEw",
"organizations_url": "https://api.github.com/users/yangyijune/orgs",
"received_events_url": "https://api.github.com/users/yangyijune/received_events",
"repos_url": "https://api.github.com/users/yangyijune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangyijune",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid... | 2023-07-21T04:24:15Z | 2023-07-22T01:42:06Z | 2023-07-22T01:42:06Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
  warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
generator = self._generate_tables(**gen_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in
_generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files="https://huggingface.co/datasets/laion/laion-coco/resolve/d22869de3ccd39dfec1507f7ded32e4a518dad24/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
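Since the reader's error usually points at a corrupted download, here is a small sketch to scan the cache for truncated parquet files (the cache path matches my logs above; `PAR1` is the parquet magic expected at both ends of a valid file):

```python
import os
import pathlib

downloads = pathlib.Path.home() / ".cache" / "huggingface" / "datasets" / "downloads"
for path in downloads.iterdir():
    if not path.is_file() or path.stat().st_size < 8:
        continue
    with path.open("rb") as f:
        head = f.read(4)
        f.seek(-4, os.SEEK_END)
        tail = f.read(4)
    if head == b"PAR1" and tail != b"PAR1":
        print("likely truncated parquet:", path)  # delete this file and re-run load_dataset
```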
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)  # same error with False
```
### Expected behavior
The laion-coco dataset should load properly.
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4",
"events_url": "https://api.github.com/users/yangyijune/events{/privacy}",
"followers_url": "https://api.github.com/users/yangyijune/followers",
"following_url": "https://api.github.com/users/yangyijune/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangyijune",
"id": 54424110,
"login": "yangyijune",
"node_id": "MDQ6VXNlcjU0NDI0MTEw",
"organizations_url": "https://api.github.com/users/yangyijune/orgs",
"received_events_url": "https://api.github.com/users/yangyijune/received_events",
"repos_url": "https://api.github.com/users/yangyijune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangyijune",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6058/timeline | null | completed | null | null | 21.2975 | 1,589 |
https://api.github.com/repos/huggingface/datasets/issues/6057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6057/comments | https://api.github.com/repos/huggingface/datasets/issues/6057/events | https://github.com/huggingface/datasets/issues/6057 | 1,815,100,151 | I_kwDODunzps5sMDr3 | 6,057 | Why is the speed difference of gen example so big? | {
"avatar_url": "https://avatars.githubusercontent.com/u/46072190?v=4",
"events_url": "https://api.github.com/users/pixeli99/events{/privacy}",
"followers_url": "https://api.github.com/users/pixeli99/followers",
"following_url": "https://api.github.com/users/pixeli99/following{/other_user}",
"gists_url": "https://api.github.com/users/pixeli99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pixeli99",
"id": 46072190,
"login": "pixeli99",
"node_id": "MDQ6VXNlcjQ2MDcyMTkw",
"organizations_url": "https://api.github.com/users/pixeli99/orgs",
"received_events_url": "https://api.github.com/users/pixeli99/received_events",
"repos_url": "https://api.github.com/users/pixeli99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pixeli99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pixeli99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pixeli99",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi!\r\n\r\nIt's hard to explain this behavior without more information. Can you profile the slower version with the following code\r\n```python\r\nimport cProfile, pstats\r\nfrom datasets import load_dataset\r\n\r\nwith cProfile.Profile() as profiler:\r\n ds = load_dataset(...)\r\n\r\nstats = pstats.Stats(profi... | 2023-07-21T03:34:49Z | 2023-10-04T18:06:16Z | 2023-10-04T18:06:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('text_content')
        # read once via a context manager so the file handle is closed promptly
        # (the original bare open().read() leaked one handle per example)
        with open(image_path, "rb") as image_file:
            image_data = image_file.read()
yield idx, {
"text": text_content,
"image": {
"path": image_path,
"bytes": image_data,
},
"conditioning_image": {
"path": image_path,
"bytes": image_data,
},
}
```
Hello,
I use the function above to process my local dataset, but I am very surprised by how much the example-generation speed varies. When I start a training task, it is **sometimes 1000 examples/s and sometimes only 10 examples/s.**

The speed is not fluctuating within a single run; rather, the reading speed differs between training runs, which forces me to restart training over and over until example generation reaches normal speed.
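To pin down where the slow runs spend their time, a profiling sketch (the dataset path is a placeholder for my local loading script):

```python
import cProfile
import pstats

from datasets import load_dataset

with cProfile.Profile() as profiler:
    ds = load_dataset("path/to/my_script.py")  # placeholder

stats = pstats.Stats(profiler)
stats.sort_stats("cumtime").print_stats(20)  # show the 20 most expensive call paths
```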
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6057/timeline | null | completed | null | null | 1,814.523889 | 1,590 |
https://api.github.com/repos/huggingface/datasets/issues/6054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6054/comments | https://api.github.com/repos/huggingface/datasets/issues/6054/events | https://github.com/huggingface/datasets/issues/6054 | 1,813,271,304 | I_kwDODunzps5sFFMI | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5929"
] | 2023-07-20T06:36:14Z | 2023-07-21T15:19:37Z | 2023-07-21T15:19:37Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, processing slows down a lot if I add `import torch` at the start of the script, even though I don't use it.
I'm not sure whether this is specific to `torch` or whether any other "large" package causes the same result.
BTW, `import lightning` also slows it down.
Below are the progress bars of `Dataset.map`; the only difference between the two runs is the presence of `import torch`, yet the speed differs by 6-7x.
- without `import torch` 
- with `import torch` 
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer
# import torch
# import lightning
def rearrange_datapoints(
batch,
tokenizer,
sequence_length,
):
datapoints = []
input_ids = []
for x in batch['input_ids']:
input_ids += x
while len(input_ids) >= sequence_length:
datapoint = input_ids[:sequence_length]
datapoints.append(datapoint)
input_ids[:sequence_length] = []
if input_ids:
paddings = [-1] * (sequence_length - len(input_ids))
datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
datapoints.append(datapoint)
batch['input_ids'] = datapoints
return batch
if __name__ == '__main__':
disable_caching()
tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
dataset = load_from_disk('...')
dataset = dataset.map(
rearrange_datapoints,
fn_kwargs=dict(
tokenizer=tokenizer,
sequence_length=2048,
),
batched=True,
num_proc=8,
)
```
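If it's useful for triage: thread over-subscription after `import torch` is a common culprit with forked `num_proc` workers, so one mitigation I would try (an assumption, not a confirmed fix for this report) is capping intra-op threads before mapping:

```python
import os

os.environ["OMP_NUM_THREADS"] = "1"  # must be set before torch spins up its thread pools

import torch

torch.set_num_threads(1)  # keep each forked map worker from competing over CPU cores
```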
### Expected behavior
The multi-processed `Dataset.map` function speed between with and without `import torch` should be the same.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6054/timeline | null | completed | null | null | 32.723056 | 1,593 |
https://api.github.com/repos/huggingface/datasets/issues/6053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6053/comments | https://api.github.com/repos/huggingface/datasets/issues/6053/events | https://github.com/huggingface/datasets/issues/6053 | 1,812,635,902 | I_kwDODunzps5sCqD- | 6,053 | Change package name from "datasets" to something less generic | {
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"events_url": "https://api.github.com/users/jack-jjm/events{/privacy}",
"followers_url": "https://api.github.com/users/jack-jjm/followers",
"following_url": "https://api.github.com/users/jack-jjm/following{/other_user}",
"gists_url": "https://api.github.com/users/jack-jjm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jack-jjm",
"id": 2124157,
"login": "jack-jjm",
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"organizations_url": "https://api.github.com/users/jack-jjm/orgs",
"received_events_url": "https://api.github.com/users/jack-jjm/received_events",
"repos_url": "https://api.github.com/users/jack-jjm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jack-jjm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jack-jjm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jack-jjm",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"This would break a lot of existing code, so we can't really do this.",
"I encountered this issue while working on a large project with 6+ years history. We have a submodule named datasets in the backend, and face a big challenge incorporating huggingface datasets into the project, especially considering django a... | 2023-07-19T19:53:28Z | 2024-11-20T21:22:36Z | 2023-10-03T16:04:09Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6053/timeline | null | not_planned | null | null | 1,820.178056 | 1,594 |
https://api.github.com/repos/huggingface/datasets/issues/6051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6051/comments | https://api.github.com/repos/huggingface/datasets/issues/6051/events | https://github.com/huggingface/datasets/issues/6051 | 1,811,549,650 | I_kwDODunzps5r-g3S | 6,051 | Skipping shard in the remote repo and resume upload | {
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before ... | 2023-07-19T09:25:26Z | 2023-07-20T18:16:01Z | 2023-07-20T18:16:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
For some reason, when I try to resume the upload of my dataset, it takes a very long time to reach the index of the shard from which uploading should resume.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enumerate(itertools.chain([first_shard], shards_iter)),
desc="Pushing dataset shards to the dataset hub",
total=num_shards,
disable=not logging.is_progress_bar_enabled(),
):
shard_path_in_repo = path_in_repo(index, shard)
# Upload a shard only if it doesn't already exist in the repository
if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
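Concretely, something along these lines is what I have in mind (a sketch only, assuming the repo path can be derived from the shard index alone, without materializing the shard):

```python
for index in range(num_shards):
    shard_path_in_repo = path_in_repo(index, num_shards)  # assumption: index-only naming
    if shard_path_in_repo in data_files:
        continue  # already uploaded: skip without ever building the shard
    shard = dataset.shard(num_shards=num_shards, index=index, contiguous=True)
    # ... embed external files and upload the shard as before ...
```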
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| {
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6051/timeline | null | completed | null | null | 32.842778 | 1,596 |
https://api.github.com/repos/huggingface/datasets/issues/6048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6048/comments | https://api.github.com/repos/huggingface/datasets/issues/6048/events | https://github.com/huggingface/datasets/issues/6048 | 1,809,629,346 | I_kwDODunzps5r3MCi | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | {
"avatar_url": "https://avatars.githubusercontent.com/u/137855591?v=4",
"events_url": "https://api.github.com/users/yangy1992/events{/privacy}",
"followers_url": "https://api.github.com/users/yangy1992/followers",
"following_url": "https://api.github.com/users/yangy1992/following{/other_user}",
"gists_url": "https://api.github.com/users/yangy1992/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangy1992",
"id": 137855591,
"login": "yangy1992",
"node_id": "U_kgDOCDeCZw",
"organizations_url": "https://api.github.com/users/yangy1992/orgs",
"received_events_url": "https://api.github.com/users/yangy1992/received_events",
"repos_url": "https://api.github.com/users/yangy1992/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangy1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangy1992/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangy1992",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work."
] | 2023-07-18T10:16:34Z | 2023-07-18T16:18:39Z | 2023-07-18T16:18:39Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I get the error below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? How can I fix this, given that my machine cannot access the internet?
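For reference, with a recent `datasets` release the `audiofolder` loader ships inside the package (so no script download should be needed), and offline mode can be forced explicitly. A sketch of what I would expect to work fully offline (assuming `datasets>=2.5`, where `audiofolder` was added):

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

import datasets

common_voice_test = datasets.load_dataset(
    "audiofolder",
    data_dir="./dataset/",
    cache_dir="./cache",
    split=datasets.Split.TEST,
)
```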
### Steps to reproduce the bug
1
### Expected behavior
No error when using the `load_dataset` function.
### Environment info
python=3.8.15 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6048/timeline | null | completed | null | null | 6.034722 | 1,598 |
https://api.github.com/repos/huggingface/datasets/issues/6039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6039/comments | https://api.github.com/repos/huggingface/datasets/issues/6039/events | https://github.com/huggingface/datasets/issues/6039 | 1,806,508,451 | I_kwDODunzps5rrSGj | 6,039 | Loading column subset from parquet file produces error since version 2.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kklemon",
"id": 1430243,
"login": "kklemon",
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"repos_url": "https://api.github.com/users/kklemon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kklemon",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-07-16T09:13:07Z | 2023-07-24T14:35:04Z | 2023-07-24T14:35:04Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
raise ValueError(
ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}'
```
This seems to occur because `datasets` is checking whether the columns in the schema exactly match the provided list of columns, instead of whether they are a subset.
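In other words, I'd expect the check to validate `columns` as a subset of the schema rather than requiring an exact match; a rough sketch of the idea (not the actual implementation):

```python
# sketch: accept any subset of the schema's columns
missing = set(columns) - set(features)
if missing:
    raise ValueError(f"Columns {sorted(missing)} not found in the parquet schema")
features = {name: dtype for name, dtype in features.items() if name in columns}
```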
### Steps to reproduce the bug
```python
# Prepare some sample data
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.to_parquet('iris.parquet')
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
print(iris.columns)
# Load data with datasets
from datasets import load_dataset
# Load full parquet file
dataset = load_dataset('parquet', data_files='iris.parquet')
# Load column subset; throws error for datasets>=2.13
dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length'])
```
### Expected behavior
No error should be thrown and the given column subset should be loaded.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6039/timeline | null | completed | null | null | 197.365833 | 1,607 |
https://api.github.com/repos/huggingface/datasets/issues/6038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6038/comments | https://api.github.com/repos/huggingface/datasets/issues/6038/events | https://github.com/huggingface/datasets/issues/6038 | 1,805,960,244 | I_kwDODunzps5rpMQ0 | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | {
"avatar_url": "https://avatars.githubusercontent.com/u/53547009?v=4",
"events_url": "https://api.github.com/users/BaiMeiyingxue/events{/privacy}",
"followers_url": "https://api.github.com/users/BaiMeiyingxue/followers",
"following_url": "https://api.github.com/users/BaiMeiyingxue/following{/other_user}",
"gists_url": "https://api.github.com/users/BaiMeiyingxue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BaiMeiyingxue",
"id": 53547009,
"login": "BaiMeiyingxue",
"node_id": "MDQ6VXNlcjUzNTQ3MDA5",
"organizations_url": "https://api.github.com/users/BaiMeiyingxue/orgs",
"received_events_url": "https://api.github.com/users/BaiMeiyingxue/received_events",
"repos_url": "https://api.github.com/users/BaiMeiyingxue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BaiMeiyingxue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BaiMeiyingxue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BaiMeiyingxue",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Instead of writing the loading script, you can use the built-in loader to [load JSON files](https://huggingface.co/docs/datasets/loading#json):\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"json\", data_files={\"train\": os.path.join(data_dir[\"train\"]), \"dev\": os.path.join(data_dir[\... | 2023-07-15T07:58:08Z | 2023-07-24T11:54:15Z | 2023-07-24T11:54:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi, I use the code below to load a local file:
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
# urls = _URLS[self.config.name]
data_dir = dl_manager.download_and_extract(_URLs)
print(data_dir)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["train"]),
"split": "train",
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["dev"]),
"split": "dev",
},
),
]
```
and this error occurred
```
Traceback (most recent call last):
File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
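For reference, here is a minimal sketch of the built-in JSON loader approach suggested in the comments — the file paths below are placeholders, not the actual files from this report:
```python
import os
from datasets import load_dataset

# Hypothetical local paths standing in for the train/dev JSON files.
data_files = {
    "train": os.path.join("data", "train.json"),
    "dev": os.path.join("data", "dev.json"),
}

# The generic "json" builder avoids writing a custom loading script entirely.
dataset = load_dataset("json", data_files=data_files)
```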
Could you help me? | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6038/timeline | null | completed | null | null | 219.935278 | 1,608 |
https://api.github.com/repos/huggingface/datasets/issues/6037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6037/comments | https://api.github.com/repos/huggingface/datasets/issues/6037/events | https://github.com/huggingface/datasets/issues/6037 | 1,805,887,184 | I_kwDODunzps5ro6bQ | 6,037 | Documentation links to examples are broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"These docs are outdated (version 1.2.1 is over two years old). Please refer to [this](https://huggingface.co/docs/datasets/dataset_script) version instead.\r\n\r\nInitially, we hosted datasets in this repo, but now you can find them [on the HF Hub](https://huggingface.co/datasets) (e.g. the [`ag_news`](https://hug... | 2023-07-15T04:54:50Z | 2023-07-17T22:35:14Z | 2023-07-17T15:10:32Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files)
### Steps to reproduce the bug
Click on links to examples from latest documentation
### Expected behavior
Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py
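As noted in the comments, datasets now live on the Hub and can be loaded by name without referencing the script location directly:
```python
from datasets import load_dataset

# Resolves the dataset from the Hub repo at huggingface.co/datasets/ag_news.
ds = load_dataset("ag_news")
```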
### Environment info
dataset v1.2.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6037/timeline | null | completed | null | null | 58.261667 | 1,609 |
https://api.github.com/repos/huggingface/datasets/issues/6034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6034/comments | https://api.github.com/repos/huggingface/datasets/issues/6034/events | https://github.com/huggingface/datasets/issues/6034 | 1,804,501,361 | I_kwDODunzps5rjoFx | 6,034 | load_dataset hangs on WSL | {
"avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4",
"events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}",
"followers_url": "https://api.github.com/users/Andy-Zhou2/followers",
"following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}",
"gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Andy-Zhou2",
"id": 20140522,
"login": "Andy-Zhou2",
"node_id": "MDQ6VXNlcjIwMTQwNTIy",
"organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs",
"received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events",
"repos_url": "https://api.github.com/users/Andy-Zhou2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Andy-Zhou2",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.",
"Thanks - that works! However it doesn't resolve the origina... | 2023-07-14T09:03:10Z | 2023-07-14T14:48:29Z | 2023-07-14T14:48:29Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not sure why socket is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works on native Windows and on my MacBook (the cache is quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes
### Expected behavior
cache quickly recognized and loaded within a second
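One way to skip the network check entirely — a sketch assuming the dataset is already fully cached — is to enable offline mode before importing `datasets`:
```python
import os

# Must be set before `datasets` is imported for the flag to take effect.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# Loads directly from the local cache without contacting the Hub.
ds = datasets.load_dataset("ai2_arc", "ARC-Challenge")
```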
### Environment info
Please let me know if I should provide more environment information. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4",
"events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}",
"followers_url": "https://api.github.com/users/Andy-Zhou2/followers",
"following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}",
"gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Andy-Zhou2",
"id": 20140522,
"login": "Andy-Zhou2",
"node_id": "MDQ6VXNlcjIwMTQwNTIy",
"organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs",
"received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events",
"repos_url": "https://api.github.com/users/Andy-Zhou2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Andy-Zhou2",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6034/timeline | null | completed | null | null | 5.755278 | 1,612 |
https://api.github.com/repos/huggingface/datasets/issues/6033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6033/comments | https://api.github.com/repos/huggingface/datasets/issues/6033/events | https://github.com/huggingface/datasets/issues/6033 | 1,804,482,051 | I_kwDODunzps5rjjYD | 6,033 | `map` function doesn't fully utilize `input_columns`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-07-14T08:49:28Z | 2023-07-14T09:16:04Z | 2023-07-14T09:16:04Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I wanted to select only some columns of the data, and I thought that's what the `input_columns` argument is for.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is the `update` call on the `dict`-typed `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even though `function_args` selects `input_columns`, `update` preserves the columns other than `input_columns`.
I think it should take a new dictionary with columns in `input_columns` like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs)
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
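In the meantime, a workaround sketch (not necessarily the intended use of `input_columns`) is to drop the unwanted columns explicitly with `remove_columns`:
```python
from datasets import Dataset

ds = Dataset.from_dict(
    {"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8]}
).to_iterable_dataset()

def fn(batch):
    return {"a": batch["a"], "d": batch["d"]}

# Dropping the other columns explicitly leaves only ["a", "d"] in the output.
ds = ds.map(fn, batched=True, remove_columns=["b", "c"])
print(next(iter(ds)))  # {'a': 1, 'd': 7}
```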
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6033/timeline | null | completed | null | null | 0.443333 | 1,613 |
https://api.github.com/repos/huggingface/datasets/issues/6031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6031/comments | https://api.github.com/repos/huggingface/datasets/issues/6031/events | https://github.com/huggingface/datasets/issues/6031 | 1,804,183,858 | I_kwDODunzps5riaky | 6,031 | Argument type for map function changes when using `input_columns` for `IterableDataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Yes, this is intended."
] | 2023-07-14T05:11:14Z | 2023-07-14T14:44:15Z | 2023-07-14T14:44:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I wrote a `tokenize(examples)` function to pass as an argument to the `map` function of an `IterableDataset`.
It takes a dictionary-typed `examples` parameter.
It is used in `train_dataset = train_dataset.map(tokenize, batched=True)`
No error is raised.
Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` function to select only the keys and values I needed.
It gives me an error saying
```
TypeError: tokenize() takes 1 positional argument but 3 were given.
```
The code below matters.
https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687
For example, `inputs = {"a":1, "b":2, "c":3}`.
If `self.input_columns` is `None`,
`inputs` is a dictionary type variable and `function_args` becomes a `list` of a single `dict` variable.
`function_args` becomes `[{"a":1, "b":2, "c":3}]`
Otherwise, let's say `self.input_columns = ["a", "c"]`.
`[inputs[col] for col in self.input_columns]` results in `[1, 3]`.
I think it should be `[{"a":1, "c":3}]`.
I want to ask if the resulting format is intended.
Maybe I can modify `tokenize()` to have 2 parameters in this case instead of having 1 dictionary.
But this is confusing to me.
Or it should be fixed as `[{col:inputs[col] for col in self.input_columns}]`
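Assuming the current behavior is intended, here is a sketch of a function signature that matches it — one positional parameter per selected column rather than a single dict:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [0, 0], "c": [3, 4]}).to_iterable_dataset()

def tokenize(a, c):
    # Each column listed in input_columns arrives as its own positional argument.
    return {"a_plus_c": [x + y for x, y in zip(a, c)]}

ds = ds.map(tokenize, batched=True, input_columns=["a", "c"])
print(next(iter(ds)))
```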
### Steps to reproduce the bug
Run `map` function of `IterableDataset` with `input_columns` argument.
### Expected behavior
`function_args` would be more consistent if it kept the same dictionary format.
I think it should be `[{"a":1, "c":3}]`.
### Environment info
dataset version: 2.12
python: 3.8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6031/timeline | null | completed | null | null | 9.550278 | 1,615 |
https://api.github.com/repos/huggingface/datasets/issues/6025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6025/comments | https://api.github.com/repos/huggingface/datasets/issues/6025/events | https://github.com/huggingface/datasets/issues/6025 | 1,801,852,601 | I_kwDODunzps5rZha5 | 6,025 | Using a dataset for a use other than it was intended for. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/surya-narayanan",
"id": 17240858,
"login": "surya-narayanan",
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/surya-narayanan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "
] | 2023-07-12T22:33:17Z | 2023-07-13T13:57:36Z | 2023-07-13T13:57:36Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the `label_column` must be present in the dataset for some reason?
Here is the full stacktrace
```
File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders
dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted")
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets
return _interleave_iterable_datasets(
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets
info = DatasetInfo.from_merge([d.info for d in datasets])
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp>
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__
self.task_templates = [
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features
raise ValueError(f"Column {self.label_column} is not present in features.")
ValueError: Column label is not present in features.
```
### Steps to reproduce the bug
Delete the column `labels` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets.
### Expected behavior
Should let me use the dataset with just the `text` field
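Pending a fix, here is a sketch of the workaround mentioned in the comments — clearing the stale `task_templates` before interleaving:
```python
from datasets import load_dataset, interleave_datasets

ds = load_dataset("rotten_tomatoes", split="train", streaming=True).remove_columns("label")
other = load_dataset("ag_news", split="train", streaming=True).remove_columns("label")

# Clearing the stale task template avoids the "Column label is not present" error.
ds.info.task_templates = None
other.info.task_templates = None

mixed = interleave_datasets([ds, other], stopping_strategy="all_exhausted")
```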
### Environment info
latest datasets library? I don't think this was an issue in earlier versions. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6025/timeline | null | completed | null | null | 15.405278 | 1,621 |
https://api.github.com/repos/huggingface/datasets/issues/6022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6022/comments | https://api.github.com/repos/huggingface/datasets/issues/6022/events | https://github.com/huggingface/datasets/issues/6022 | 1,800,092,589 | I_kwDODunzps5rSzut | 6,022 | Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int' | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 2023-07-12T03:20:17Z | 2023-07-12T16:18:06Z | 2023-07-12T16:18:05Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3483, in _map_single
writer.write_batch(batch)
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_writer.py", line 549, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 2063, in cast_array_to_feature
return feature.cast_storage(array)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/features/features.py", line 1098, in cast_storage
if min_max["max"] >= self.num_classes:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/t1.py", line 33, in <module>
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 850, in map
{
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 851, in <dictcomp>
k: dataset.map(
^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 577, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 542, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3179, in map
for rank, done, content in iflatmap_unordered(
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
```
### Steps to reproduce the bug
1. Check out the latest main of datasets.
2. Run the code:
```python
from datasets import load_dataset
def transforms(examples):
# examples["pixel_values"] = [image.convert("RGB").resize((100, 100)) for image in examples["image"]]
return examples
ds = load_dataset("scene_parse_150")
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
print(ds)
```
### Expected behavior
map without exception.
### Environment info
Datasets: https://github.com/huggingface/datasets/commit/b8067c0262073891180869f700ebef5ac3dc5cce
Python: 3.11.4
System: Macos | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6022/timeline | null | completed | null | null | 12.963333 | 1,624 |
https://api.github.com/repos/huggingface/datasets/issues/6017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6017/comments | https://api.github.com/repos/huggingface/datasets/issues/6017/events | https://github.com/huggingface/datasets/issues/6017 | 1,799,309,132 | I_kwDODunzps5rP0dM | 6,017 | Switch to huggingface_hub's HfFileSystem | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [] | 2023-07-11T16:24:40Z | 2023-07-17T17:01:01Z | 2023-07-17T17:01:01Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases
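For context, a short sketch of the `huggingface_hub` filesystem this proposes switching to (API as of recent `huggingface_hub` releases):
```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Hub paths are addressed as "datasets/<namespace-or-name>/<path>".
print(fs.ls("datasets/squad", detail=False))

with fs.open("datasets/squad/README.md", "r") as f:
    print(f.read()[:200])
```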
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6017/timeline | null | completed | null | null | 144.605833 | 1,629 |
https://api.github.com/repos/huggingface/datasets/issues/6014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6014/comments | https://api.github.com/repos/huggingface/datasets/issues/6014/events | https://github.com/huggingface/datasets/issues/6014 | 1,798,213,816 | I_kwDODunzps5rLpC4 | 6,014 | Request to Share/Update Dataset Viewer Code | {
"avatar_url": "https://avatars.githubusercontent.com/u/105081034?v=4",
"events_url": "https://api.github.com/users/lilyorlilypad/events{/privacy}",
"followers_url": "https://api.github.com/users/lilyorlilypad/followers",
"following_url": "https://api.github.com/users/lilyorlilypad/following{/other_user}",
"gists_url": "https://api.github.com/users/lilyorlilypad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lilyorlilypad",
"id": 105081034,
"login": "lilyorlilypad",
"node_id": "U_kgDOBkNoyg",
"organizations_url": "https://api.github.com/users/lilyorlilypad/orgs",
"received_events_url": "https://api.github.com/users/lilyorlilypad/received_events",
"repos_url": "https://api.github.com/users/lilyorlilypad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lilyorlilypad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lilyorlilypad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lilyorlilypad",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/da... | 2023-07-11T06:36:09Z | 2024-07-20T07:29:08Z | 2023-09-25T12:01:17Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
Overview:
The repository (huggingface/datasets-viewer) was recently archived, and when I tried to run the code, I got the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to the lack of documentation for that attribute.
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6014/timeline | null | completed | null | null | 1,829.418889 | 1,632 |
https://api.github.com/repos/huggingface/datasets/issues/6011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6011/comments | https://api.github.com/repos/huggingface/datasets/issues/6011/events | https://github.com/huggingface/datasets/issues/6011 | 1,795,296,568 | I_kwDODunzps5rAg04 | 6,011 | Documentation: wiki_dpr Dataset has no metric_type for Faiss Index | {
"avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4",
"events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/YichiRockyZhang/followers",
"following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YichiRockyZhang",
"id": 29335344,
"login": "YichiRockyZhang",
"node_id": "MDQ6VXNlcjI5MzM1MzQ0",
"organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs",
"received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events",
"repos_url": "https://api.github.com/users/YichiRockyZhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YichiRockyZhang",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! You can do `ds.get_index(\"embeddings\").faiss_index.metric_type` to get the metric type and then match the result with the FAISS metric [enum](https://github.com/facebookresearch/faiss/blob/43d86e30736ede853c384b24667fc3ab897d6ba9/faiss/MetricType.h#L22-L36) (should be L2).",
"Ah! Thank you for pointing thi... | 2023-07-09T08:30:19Z | 2023-07-11T03:02:36Z | 2023-07-11T03:02:36Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
### Steps to reproduce the bug
System: Python 3.9.16, Transformers 4.30.2, WSL
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
```py
from transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
def encode_question(query, tokenizer=tokenizer, encoder=encoder):
inputs = tokenizer(query, return_tensors='pt')
question_embedding = encoder(**inputs)[0].detach().numpy()
return question_embedding
def get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False):
enc_question = encode_question(query, tokenizer, encoder)
topk_results = ds.get_nearest_examples(index_name='embeddings',
query=enc_question,
k=k)
a = torch.tensor(enc_question[0]).reshape(768)
b = torch.tensor(topk_results.examples['embeddings'][0])
print(a.shape, b.shape)
print(torch.dot(a, b))
print((a-b).pow(2).sum())
return topk_results
```
The [FAISS documentation](https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. I compute both for the sample query:
```py
query = """ it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales.
Despite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful."""
get_knn(query,k=5)
```
Here, I get a dot product of 80.6020 and an L2 distance of 77.6616, and
```py
NearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ],
dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the "Power Rangers" franchise which has continued since then into sequel TV series (with "Power Rangers Beast Morphers" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of "Power Rangers", Saban acquired the rights to more of Toei\'s library, creating "VR Troopers" and "Big Bad Beetleborgs" from several Metal Hero Series shows and "Masked Rider" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to "Gridman the Hyper Agent" and turning it into "Superhuman Samurai Syber-Squad". In 2002,',
```
Using `k=1` shows that the higher the output score, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) disagrees with the reported score (76.2). Perhaps this has to do with my using the `compressed` embeddings?
### Expected behavior
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # METRIC_INNER_PRODUCT
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4",
"events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/YichiRockyZhang/followers",
"following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YichiRockyZhang",
"id": 29335344,
"login": "YichiRockyZhang",
"node_id": "MDQ6VXNlcjI5MzM1MzQ0",
"organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs",
"received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events",
"repos_url": "https://api.github.com/users/YichiRockyZhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YichiRockyZhang",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6011/timeline | null | completed | null | null | 42.538056 | 1,635 |
https://api.github.com/repos/huggingface/datasets/issues/6008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6008/comments | https://api.github.com/repos/huggingface/datasets/issues/6008/events | https://github.com/huggingface/datasets/issues/6008 | 1,789,869,344 | I_kwDODunzps5qrz0g | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | {
"avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4",
"events_url": "https://api.github.com/users/andreemic/events{/privacy}",
"followers_url": "https://api.github.com/users/andreemic/followers",
"following_url": "https://api.github.com/users/andreemic/following{/other_user}",
"gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andreemic",
"id": 27695722,
"login": "andreemic",
"node_id": "MDQ6VXNlcjI3Njk1NzIy",
"organizations_url": "https://api.github.com/users/andreemic/orgs",
"received_events_url": "https://api.github.com/users/andreemic/received_events",
"repos_url": "https://api.github.com/users/andreemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andreemic",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Arra... | 2023-07-05T16:06:48Z | 2023-07-10T13:46:39Z | 2023-07-10T13:46:39Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available.
Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.
Let me know if you have ideas how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
for row in range(10000):
yield {"i": np.random.rand(512, 512, 3)}
Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
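Following the suggestion in the comments, a sketch that flushes to disk more often and declares a fixed-size array feature (forwarding `writer_batch_size` through `from_generator` is an assumption here, not verified against this exact version):
```python
import numpy as np
from datasets import Array3D, Dataset, Features

def gen():
    for _ in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}

features = Features({"i": Array3D(shape=(512, 512, 3), dtype="float64")})

# writer_batch_size controls how many rows are buffered before being written
# to disk; the default is 1000, right where the reported freeze appears.
ds = Dataset.from_generator(gen, features=features, writer_batch_size=100)
```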
### Expected behavior
Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6008/timeline | null | completed | null | null | 117.664167 | 1,638 |
https://api.github.com/repos/huggingface/datasets/issues/6006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6006/comments | https://api.github.com/repos/huggingface/datasets/issues/6006/events | https://github.com/huggingface/datasets/issues/6006 | 1,788,855,582 | I_kwDODunzps5qn8Ue | 6,006 | NotADirectoryError when loading gigawords | {
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xipq",
"id": 115634163,
"login": "xipq",
"node_id": "U_kgDOBuRv8w",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"repos_url": "https://api.github.com/users/xipq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xipq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."
] | 2023-07-05T06:23:41Z | 2023-07-05T06:31:02Z | 2023-07-05T06:31:01Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Got a `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last):
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
    for key, record in generator:
  File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples
    with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
    return function(*args, use_auth_token=use_auth_token, **kwargs)
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen
    return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Download and process the dataset successfully
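Per the author's follow-up comment, the root cause was a corrupted download, and clearing the download cache fixed it. A minimal sketch of that cleanup, assuming the default cache location seen in the traceback above:
```python
import shutil
from pathlib import Path

# Assumed default cache location (matches the paths in the traceback above).
downloads_cache = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads"
shutil.rmtree(downloads_cache, ignore_errors=True)  # the next load_dataset() re-downloads
```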
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xipq",
"id": 115634163,
"login": "xipq",
"node_id": "U_kgDOBuRv8w",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"repos_url": "https://api.github.com/users/xipq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xipq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6006/timeline | null | completed | null | null | 0.122222 | 1,640 |
https://api.github.com/repos/huggingface/datasets/issues/5999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5999/comments | https://api.github.com/repos/huggingface/datasets/issues/5999/events | https://github.com/huggingface/datasets/issues/5999 | 1,781,851,513 | I_kwDODunzps5qNOV5 | 5,999 | Getting a 409 error while loading xglue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4",
"events_url": "https://api.github.com/users/Praful932/events{/privacy}",
"followers_url": "https://api.github.com/users/Praful932/followers",
"following_url": "https://api.github.com/users/Praful932/following{/other_user}",
"gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Praful932",
"id": 45713796,
"login": "Praful932",
"node_id": "MDQ6VXNlcjQ1NzEzNzk2",
"organizations_url": "https://api.github.com/users/Praful932/orgs",
"received_events_url": "https://api.github.com/users/Praful932/received_events",
"repos_url": "https://api.github.com/users/Praful932/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praful932/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Praful932",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] | 2023-06-30T04:13:54Z | 2023-06-30T05:57:23Z | 2023-06-30T05:57:22Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5999/timeline | null | completed | null | null | 1.724444 | 1,647 |
https://api.github.com/repos/huggingface/datasets/issues/5998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5998/comments | https://api.github.com/repos/huggingface/datasets/issues/5998/events | https://github.com/huggingface/datasets/issues/5998 | 1,781,805,018 | I_kwDODunzps5qNC_a | 5,998 | The current implementation has a potential bug in the sort method | {
"avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4",
"events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}",
"followers_url": "https://api.github.com/users/wangyuxinwhy/followers",
"following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangyuxinwhy",
"id": 22192665,
"login": "wangyuxinwhy",
"node_id": "MDQ6VXNlcjIyMTkyNjY1",
"organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs",
"received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events",
"repos_url": "https://api.github.com/users/wangyuxinwhy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangyuxinwhy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @wangyuxinwhy. "
] | 2023-06-30T03:16:57Z | 2023-06-30T14:21:03Z | 2023-06-30T14:11:25Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
In the `sort` method, there's this piece of code:
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
The `column_names` type annotation implies that a tuple can be passed, but doing so raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after I changed the tuple into a list, everything worked fine.
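For example, this call succeeds:
```python
dataset.sort(column_names=['premise', 'hypothesis'])  # list instead of tuple works fine
```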
Changing the code to the following would fix the problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
if isinstance(column_names, str):
column_names = [column_names]
else:
column_names = list(column_names)
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
### Expected behavior
Passing tuple into column_names should be equivalent to passing list
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5998/timeline | null | completed | null | null | 10.907778 | 1,648 |
https://api.github.com/repos/huggingface/datasets/issues/5993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5993/comments | https://api.github.com/repos/huggingface/datasets/issues/5993/events | https://github.com/huggingface/datasets/issues/5993 | 1,776,643,555 | I_kwDODunzps5p5W3j | 5,993 | ValueError: Table schema does not match schema used to create file | {
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] | 2023-06-27T10:54:07Z | 2023-06-27T15:36:42Z | 2023-06-27T15:32:44Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because, after a `.select_columns()` call with out-of-order columns, the output dataset's features schema ends up out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then errors on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
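Until this is fixed in the library, one possible workaround is to write the backing Arrow table directly with `pyarrow`, so the schema is taken from the (reordered) table itself rather than from the stale features schema. A sketch only — note that `ds.data.table` is an internal attribute of `datasets`, not public API:
```python
import pyarrow.parquet as pq

# Bypass ParquetDatasetWriter and let pyarrow infer the schema
# from the underlying table (internal attribute, may change).
pq.write_table(ds.data.table, "demo.parquet")
```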
### Expected behavior
The dataset gets successfully saved as parquet.
In the same way as it does when saving it as CSV:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5993/timeline | null | completed | null | null | 4.643611 | 1,653 |
https://api.github.com/repos/huggingface/datasets/issues/5988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5988/comments | https://api.github.com/repos/huggingface/datasets/issues/5988/events | https://github.com/huggingface/datasets/issues/5988 | 1,773,257,828 | I_kwDODunzps5pscRk | 5,988 | ConnectionError: Couldn't reach dataset_infos.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4",
"events_url": "https://api.github.com/users/yulingao/events{/privacy}",
"followers_url": "https://api.github.com/users/yulingao/followers",
"following_url": "https://api.github.com/users/yulingao/following{/other_user}",
"gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yulingao",
"id": 20674868,
"login": "yulingao",
"node_id": "MDQ6VXNlcjIwNjc0ODY4",
"organizations_url": "https://api.github.com/users/yulingao/orgs",
"received_events_url": "https://api.github.com/users/yulingao/received_events",
"repos_url": "https://api.github.com/users/yulingao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yulingao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yulingao",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provid... | 2023-06-25T12:39:31Z | 2023-07-07T13:20:57Z | 2023-07-07T13:20:57Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
### Steps to reproduce the bug
```python
from datasets import load_dataset

train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')
```
### Expected behavior
download the dataset
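For debugging, one can probe the exact URL that `load_dataset` fails on, mirroring the maintainer's snippet in the comments:
```python
import requests
from huggingface_hub import hf_hub_url

# Check whether the file is reachable outside of `datasets`.
url = hf_hub_url("codeparrot/codeparrot-clean-train", "dataset_infos.json", repo_type="dataset")
print(requests.get(url).status_code)
```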
### Environment info
centos7 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5988/timeline | null | completed | null | null | 288.690556 | 1,657 |
https://api.github.com/repos/huggingface/datasets/issues/5987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5987/comments | https://api.github.com/repos/huggingface/datasets/issues/5987/events | https://github.com/huggingface/datasets/issues/5987 | 1,773,047,909 | I_kwDODunzps5prpBl | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure... | 2023-06-25T04:19:13Z | 2023-06-29T16:06:08Z | 2023-06-29T16:06:08Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do instead is skip `load_dataset` and use `load_dataset_builder` + `download_and_prepare`.
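A minimal sketch of that workaround ("username/dataset_name" is a placeholder, not a real dataset):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("username/dataset_name")  # placeholder repo id
builder.download_and_prepare(max_shard_size="500MB")  # shard size is configurable here
ds = builder.as_dataset(split="train")
```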
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
### Environment info
datasets==2.13.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5987/timeline | null | completed | null | null | 107.781944 | 1,658 |
https://api.github.com/repos/huggingface/datasets/issues/5985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5985/comments | https://api.github.com/repos/huggingface/datasets/issues/5985/events | https://github.com/huggingface/datasets/issues/5985 | 1,771,588,158 | I_kwDODunzps5pmEo- | 5,985 | Cannot reuse tokenizer object for dataset map | {
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikigenius",
"id": 12724810,
"login": "vikigenius",
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikigenius",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with ... | 2023-06-23T14:45:31Z | 2023-07-21T14:09:14Z | 2023-07-21T14:09:14Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.
Passing the tokenizer to the dataset `map` function causes the tokenizer to be fingerprinted in an unexpected way. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though its hash remains the same.
But `dumps` is able to detect that internal change, which causes the tokenizer object's fingerprint to change.
### Steps to reproduce the bug
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
But if you use just the hash of the object without dumps, the hashes don't change
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t) # Just hash no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t) # Just hash no dumps
assert th1 == th2 # This is OK
```
This causes situations such as the following
1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt`
```python
from transformers import AutoTokenizer
import datasets
class TokenizeMapper(object):
"""Mapper for tokenizer.
This is needed because the caching mechanism of HuggingFace does not work on
lambdas. Each time a new lambda will be created by a new process which will
lead to a different hash.
This way we can have a universal mapper object in init and reuse it with the same
hash for each process.
"""
def __init__(self, tokenizer):
"""Initialize the tokenizer."""
self.tokenizer = tokenizer
def __call__(self, examples, **kwargs):
"""Run the mapper."""
texts = examples["text"]
tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
batch_outputs = {
"input_ids": tt.input_ids,
"attention_mask": tt.attention_mask,
}
return batch_outputs
t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
```
The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.
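A workaround suggested in the comments is to pin the cache file explicitly, so the unstable tokenizer fingerprint no longer matters; a sketch (the file name is arbitrary):
```python
# With an explicit cache file, `map` reuses it on subsequent calls
# regardless of the tokenizer's fingerprint.
mds1 = ds["train"].map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="tokenized.arrow",
).with_format("torch")
```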
### Expected behavior
We should be able to initialize a tokenizer and reuse it, so that the same `map` computation over the same dataset is served from the cache. Instead, as shown above, the second call to `map` redoes the tokenization because of the behavior of `dumps`.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5985/timeline | null | completed | null | null | 671.395278 | 1,660 |
https://api.github.com/repos/huggingface/datasets/issues/5982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5982/comments | https://api.github.com/repos/huggingface/datasets/issues/5982/events | https://github.com/huggingface/datasets/issues/5982 | 1,770,333,296 | I_kwDODunzps5phSRw | 5,982 | 404 on Datasets Documentation Page | {
"avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4",
"events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}",
"followers_url": "https://api.github.com/users/kmulka-bloomberg/followers",
"following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}",
"gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kmulka-bloomberg",
"id": 118509387,
"login": "kmulka-bloomberg",
"node_id": "U_kgDOBxBPSw",
"organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs",
"received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events",
"repos_url": "https://api.github.com/users/kmulka-bloomberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kmulka-bloomberg",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This wasn’t working for me a bit earlier, but it looks to be back up now",
"We had a minor issue updating the docs after the latest release. It should work now :)."
] | 2023-06-22T20:14:57Z | 2023-06-26T15:45:03Z | 2023-06-26T15:45:03Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
hugginface.co | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5982/timeline | null | completed | null | null | 91.501667 | 1,663 |
https://api.github.com/repos/huggingface/datasets/issues/5981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5981/comments | https://api.github.com/repos/huggingface/datasets/issues/5981/events | https://github.com/huggingface/datasets/issues/5981 | 1,770,310,087 | I_kwDODunzps5phMnH | 5,981 | Only two cores are getting used in sagemaker with pytorch 3.10 kernel | {
"avatar_url": "https://avatars.githubusercontent.com/u/107141022?v=4",
"events_url": "https://api.github.com/users/mmr-crexi/events{/privacy}",
"followers_url": "https://api.github.com/users/mmr-crexi/followers",
"following_url": "https://api.github.com/users/mmr-crexi/following{/other_user}",
"gists_url": "https://api.github.com/users/mmr-crexi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mmr-crexi",
"id": 107141022,
"login": "mmr-crexi",
"node_id": "U_kgDOBmLXng",
"organizations_url": "https://api.github.com/users/mmr-crexi/orgs",
"received_events_url": "https://api.github.com/users/mmr-crexi/received_events",
"repos_url": "https://api.github.com/users/mmr-crexi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mmr-crexi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmr-crexi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mmr-crexi",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https://github.com/pytorch/pytorch/issues/99625",
"From reading that ticket, it may be down in mkl? Is it worth hotfixing in the ... | 2023-06-22T19:57:31Z | 2023-10-30T06:17:40Z | 2023-07-24T11:54:52Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When using the newer PyTorch kernel with Python 3.10 (`pytorch_p310`), only two cores are used by the Hugging Face `filter` and `map` functions. The PyTorch kernel with Python 3.9 would use as many cores as specified in the `num_proc` argument.
We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:
```os.sched_setaffinity(0, {i for i in range(1000)})```
The problem, as near as we can tell, is that CPU affinity was once set using a bitmask (`"0xfffff"` and the like), but recently changed to a list of processors rather than a mask. As such, only processors 1 and 17 are shown as working in htop.

When running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.
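A sketch of that affinity reset inside a mapped function, assuming a Linux host where each worker process runs the function:
```python
import os

def mapper(example):
    # Undo the inherited two-core affinity mask before doing any work.
    os.sched_setaffinity(0, set(range(os.cpu_count())))
    ...  # actual per-example processing
    return example
```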
### Steps to reproduce the bug
Repro steps:
1. Create an aws sagemaker instance
2. use the pytorch 3_10 kernel
3. Load a dataset
4. run a filter operation
5. watch as only 2 cores are used when num_proc > 2
6. run a map operation
7. watch as only 2 cores are used when num_proc > 2
8. run a map operation with processor affinity reset inside the function called via map
9. Watch as all cores run
### Expected behavior
All specified cores are used via the num_proc argument.
### Environment info
AWS sagemaker with the following init script run in the terminal after instance creation:
conda init bash
bash
conda activate pytorch_p310
pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
sudo yum -y install htop
sudo yum -y update
sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5981/timeline | null | completed | null | null | 759.955833 | 1,664 |
https://api.github.com/repos/huggingface/datasets/issues/5980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5980/comments | https://api.github.com/repos/huggingface/datasets/issues/5980/events | https://github.com/huggingface/datasets/issues/5980 | 1,770,255,973 | I_kwDODunzps5pg_Zl | 5,980 | Viewing dataset card returns “502 Bad Gateway” | {
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tbenthompson",
"id": 4241811,
"login": "tbenthompson",
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tbenthompson",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] | 2023-06-22T19:14:48Z | 2023-06-27T08:38:19Z | 2023-06-26T14:42:45Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
The URL is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tbenthompson",
"id": 4241811,
"login": "tbenthompson",
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tbenthompson",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5980/timeline | null | completed | null | null | 91.465833 | 1,665 |
https://api.github.com/repos/huggingface/datasets/issues/5975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5975/comments | https://api.github.com/repos/huggingface/datasets/issues/5975/events | https://github.com/huggingface/datasets/issues/5975 | 1,768,271,343 | I_kwDODunzps5pZa3v | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Veluchs",
"id": 135350576,
"login": "Veluchs",
"node_id": "U_kgDOCBFJMA",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Veluchs",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PR... | 2023-06-21T19:10:02Z | 2023-06-30T05:55:39Z | 2023-06-30T05:55:38Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When trying to stream a dataset, I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a dataset without streaming works as expected.
Still, I suspect this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code i use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
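As pointed out in the comments, `aiohttp` (used for streaming) is case-sensitive about these variables, so setting the upper-case variants may be enough:
```python
import os

# aiohttp only reads the upper-case variants; set both casings to be safe.
os.environ['HTTP_PROXY'] = "http://example.com:xxxx"
os.environ['HTTPS_PROXY'] = "http://example.com:xxxx"
```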
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| {
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Veluchs",
"id": 135350576,
"login": "Veluchs",
"node_id": "U_kgDOCBFJMA",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Veluchs",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5975/timeline | null | completed | null | null | 202.76 | 1,669 |
https://api.github.com/repos/huggingface/datasets/issues/5968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5968/comments | https://api.github.com/repos/huggingface/datasets/issues/5968/events | https://github.com/huggingface/datasets/issues/5968 | 1,765,252,561 | I_kwDODunzps5pN53R | 5,968 | Common Voice datasets still need `use_auth_token=True` | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"cc @pcuenca as well. \r\n\r\nNot super urgent btw",
"The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/blob/2c475b3b88e0f2e5828f830a4b91618a25ff20b7/common_voice_6_1.py#L148-L152",
"Let's remove these... | 2023-06-20T11:58:37Z | 2023-07-29T16:08:59Z | 2023-07-29T16:08:58Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
However, it throws an error - probably because something weird is hardcoded into the dataset loading script.
### Steps to reproduce the bug
1.)
```
huggingface-cli login
```
2.) Make sure that you have accepted the license here:
https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1
3.) Run:
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
4.) You'll get:
```
File ~/hf/lib/python3.10/site-packages/datasets/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
961 split_dict = SplitDict(dataset_name=self.name)
962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
965 # Checksums verification
966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_1/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)
148 hf_auth_token = dl_manager.download_config.use_auth_token
149 if hf_auth_token is None:
--> 150 raise ConnectionError(
151 "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
152 )
154 bundle_url_template = STATS["bundleURLTemplate"]
155 bundle_version = bundle_url_template.split("/")[0]
ConnectionError: Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset
```
### Expected behavior
One should not have to pass `use_auth_token=True`. Also see discussion here: https://github.com/huggingface/blog/pull/1243#discussion_r1235131150
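Until the script is fixed, the workaround is the one the error message itself suggests:
```python
from datasets import load_dataset

load_dataset(
    "mozilla-foundation/common_voice_6_1",
    "tr",
    split="train+validation",
    use_auth_token=True,  # still required by the hardcoded check in the loading script
)
```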
### Environment info
```
- `datasets` version: 2.13.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5968/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5968/timeline | null | completed | null | null | 940.1725 | 1,675 |
https://api.github.com/repos/huggingface/datasets/issues/5965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5965/comments | https://api.github.com/repos/huggingface/datasets/issues/5965/events | https://github.com/huggingface/datasets/issues/5965 | 1,763,648,540 | I_kwDODunzps5pHyQc | 5,965 | "Couldn't cast array of type" in complex datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datase... | 2023-06-19T14:16:14Z | 2023-07-26T15:13:53Z | 2023-07-26T15:13:53Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When mapping a dataset with complex types, `datasets` is sometimes unable to infer a valid schema for the batches returned by `datasets.map()`. This often comes from conflicting types, like when both empty lists and filled lists compete for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
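As the maintainer comment suggests, specifying the target features explicitly avoids the error; a minimal sketch based on that snippet:
```python
import datasets

dataset = dataset.map(
    batch_process,
    batched=True,
    batch_size=1,
    num_proc=1,
    remove_columns=dataset.column_names,
    # Pin the output schema so an all-empty first batch cannot be inferred as null:
    features=datasets.Features({"texts": datasets.Sequence(datasets.Value("string"))}),
)
```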
### Steps to reproduce the bug
A trivial reproduction case:
```python
import pytest  # needed for the pytest.raises check below
import pandas as pd
from typing import Iterator, Any
from datasets import Dataset


def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # Every column has the same length, so any of them gives the batch size.
    lengths = [len(values) for values in batch.values()]
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}


def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch


def batch_process(examples):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)


df = pd.DataFrame(
    [
        {"raw_text": ""},
        {"raw_text": "This is a test"},
        {"raw_text": "This is another test"},
    ]
)

dataset = Dataset.from_pandas(df)

# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
    dataset = dataset.map(
        batch_process,
        batched=True,
        batch_size=1,
        num_proc=1,
        remove_columns=dataset.column_names,
    )
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux | {
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5965/timeline | null | completed | null | null | 888.960833 | 1,678 |
https://api.github.com/repos/huggingface/datasets/issues/5963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5963/comments | https://api.github.com/repos/huggingface/datasets/issues/5963/events | https://github.com/huggingface/datasets/issues/5963 | 1,762,774,457 | I_kwDODunzps5pEc25 | 5,963 | Got an error _pickle.PicklingError use Dataset.from_spark. | {
"avatar_url": "https://avatars.githubusercontent.com/u/112800614?v=4",
"events_url": "https://api.github.com/users/yanzia12138/events{/privacy}",
"followers_url": "https://api.github.com/users/yanzia12138/followers",
"following_url": "https://api.github.com/users/yanzia12138/following{/other_user}",
"gists_url": "https://api.github.com/users/yanzia12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanzia12138",
"id": 112800614,
"login": "yanzia12138",
"node_id": "U_kgDOBrkzZg",
"organizations_url": "https://api.github.com/users/yanzia12138/orgs",
"received_events_url": "https://api.github.com/users/yanzia12138/received_events",
"repos_url": "https://api.github.com/users/yanzia12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanzia12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanzia12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanzia12138",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"i got error using method from_spark when using multi-node Spark cluster. seems could only use \"from_spark\" in local?",
"@lhoestq ",
"cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset build... | 2023-06-19T05:30:35Z | 2023-07-24T11:55:46Z | 2023-07-24T11:55:46Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Python 3.9.2. I got a `_pickle.PicklingError` when using `Dataset.from_spark` to load data from a Spark dataframe on a multi-node Spark cluster:
```python
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(
    df,
    keep_in_memory=True,
    cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data",
)
ds.save_to_disk(args.output_data)
```
Error:
```
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
```
_Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_
```
Traceback (most recent call last):
  File "/home/work/main.py", line 100, in <module>
    run(args)
  File "/home/work/main.py", line 80, in run
    ds = Dataset.from_spark(df1, keep_in_memory=True,
  File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark
    return SparkDatasetReader(
  File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read
    self.builder.download_and_prepare(
  File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare
    self._download_and_prepare(
  File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split
    self._validate_cache_dir()
  File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir
    self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()
  File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd
    wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,
  File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function
    pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
  File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD
    pickled_command = ser.dumps(command)
  File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps
    raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
```
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5963/timeline | null | completed | null | null | 846.419722 | 1,680 |
https://api.github.com/repos/huggingface/datasets/issues/5959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5959/comments | https://api.github.com/repos/huggingface/datasets/issues/5959/events | https://github.com/huggingface/datasets/issues/5959 | 1,757,397,507 | I_kwDODunzps5ov8ID | 5,959 | read metric glue.py from local file | {
"avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4",
"events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}",
"followers_url": "https://api.github.com/users/JiazhaoLi/followers",
"following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}",
"gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JiazhaoLi",
"id": 31148397,
"login": "JiazhaoLi",
"node_id": "MDQ6VXNlcjMxMTQ4Mzk3",
"organizations_url": "https://api.github.com/users/JiazhaoLi/orgs",
"received_events_url": "https://api.github.com/users/JiazhaoLi/received_events",
"repos_url": "https://api.github.com/users/JiazhaoLi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JiazhaoLi",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n"
] | 2023-06-14T17:59:35Z | 2023-06-14T18:04:16Z | 2023-06-14T18:04:16Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Currently, the server is offline, so I am using the GLUE metric from the local file downloaded from the Hub.
I downloaded and cached the datasets using `load_dataset('glue', 'sst2', cache_dir='/xxx')`, and then, in offline mode, I use `load_dataset('xxx/glue.py', 'sst2', cache_dir='/xxx')`. I can successfully reuse the cached datasets.
My problem is with `load_metric`.
When I run `load_metric('xxx/glue_metric.py', 'sst2', cache_dir='/xxx')`, it returns:
```
File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
    return deprecated_function(*args, **kwargs)
File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
    metric = metric_cls(
TypeError: 'NoneType' object is not callable
```
Thanks in advance for help!
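For reference, the author later resolved this (see the comments) by loading the local metric script through the `evaluate` library instead; a sketch under that assumption:
```python
import evaluate

# Assumes glue_metric.py is the locally downloaded GLUE metric script;
# the config name here follows the author's own comment.
metric = evaluate.load("glue_metric.py", "sst-2")
```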
### Steps to reproduce the bug
N/A
### Expected behavior
N/A
### Environment info
`datasets == 2.12.0` | {
"avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4",
"events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}",
"followers_url": "https://api.github.com/users/JiazhaoLi/followers",
"following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}",
"gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JiazhaoLi",
"id": 31148397,
"login": "JiazhaoLi",
"node_id": "MDQ6VXNlcjMxMTQ4Mzk3",
"organizations_url": "https://api.github.com/users/JiazhaoLi/orgs",
"received_events_url": "https://api.github.com/users/JiazhaoLi/received_events",
"repos_url": "https://api.github.com/users/JiazhaoLi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JiazhaoLi",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5959/timeline | null | completed | null | null | 0.078056 | 1,683 |
https://api.github.com/repos/huggingface/datasets/issues/5955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5955/comments | https://api.github.com/repos/huggingface/datasets/issues/5955/events | https://github.com/huggingface/datasets/issues/5955 | 1,756,827,133 | I_kwDODunzps5otw39 | 5,955 | Strange bug in loading local JSON files, using load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/73934131?v=4",
"events_url": "https://api.github.com/users/Night-Quiet/events{/privacy}",
"followers_url": "https://api.github.com/users/Night-Quiet/followers",
"following_url": "https://api.github.com/users/Night-Quiet/following{/other_user}",
"gists_url": "https://api.github.com/users/Night-Quiet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night-Quiet",
"id": 73934131,
"login": "Night-Quiet",
"node_id": "MDQ6VXNlcjczOTM0MTMx",
"organizations_url": "https://api.github.com/users/Night-Quiet/orgs",
"received_events_url": "https://api.github.com/users/Night-Quiet/received_events",
"repos_url": "https://api.github.com/users/Night-Quiet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night-Quiet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night-Quiet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night-Quiet",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This is the actual error:\r\n```\r\nFailed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values\r\n```\r\nWhich means some samples are incorrectly formatted.\r\n\r\nPyArrow, a storage backend that we use under the hoo... | 2023-06-14T12:46:00Z | 2023-06-21T14:42:15Z | 2023-06-21T14:42:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am using `load_dataset` to load a JSON file, but I found a strange bug: an error is reported when the number of records in the JSON file exceeds roughly 160,000 (the exact threshold is unclear). I have checked the data with the code below and found no issues, so I cannot determine the true cause of this error.
The data is a list of dictionaries, as follows:
```python
[
    {'input': 'something...', 'target': 'something...', 'type': 'something...', 'history': ['something...', ...]},
    ...
]
```
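Given the diagnosis in the comments (`cannot mix list and non-list, non-null values`), a quick way to hunt for offending records is to check that every field keeps a consistent Python type across the list; a sketch, assuming the structure above:
```python
import json

with open("target.json") as f:
    data = json.load(f)

# Track the set of types seen per field; a field mixing list and non-list
# values across records is what trips PyArrow up.
types_per_field = {}
for record in data:
    for key, value in record.items():
        types_per_field.setdefault(key, set()).add(type(value).__name__)

print({field: types for field, types in types_per_field.items() if len(types) > 1})
```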
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset

path = "target.json"
temp_path = "temp.json"

with open(path, "r") as f:
    data = json.load(f)
print(f"\n-------the JSON file length is: {len(data)}-------\n")

with open(temp_path, "w") as f:
    json.dump(data[:160000], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works when the JSON file length is 160000-------\n")

with open(temp_path, "w") as f:
    json.dump(data[160000:], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works and eliminates data issues-------\n")

with open(temp_path, "w") as f:
    json.dump(data[:170000], f)
dataset = load_dataset("json", data_files=temp_path)
```
### Expected behavior
```
-------the JSON file length is: 173049-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3328.81it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 639.47it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 265.85it/s]
-------This works when the JSON file length is 160000-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 2038.05it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 794.83it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 681.00it/s]
-------This works and eliminates data issues-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-63f391c89599c7b0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3682.44it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 788.70it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values
Traceback (most recent call last):
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at /home/lakala/hjc/code/pycode/glm/temp.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lakala/hjc/code/pycode/glm/test.py", line 22, in <module>
dataset = load_dataset("json", data_files=temp_path)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
```
Ubuntu==22.04
python==3.8
pytorch-transformers==1.2.0
transformers== 4.27.1
datasets==2.12.0
numpy==1.24.3
pandas==1.5.3
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5955/timeline | null | completed | null | null | 169.9375 | 1,687 |
https://api.github.com/repos/huggingface/datasets/issues/5953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5953/comments | https://api.github.com/repos/huggingface/datasets/issues/5953/events | https://github.com/huggingface/datasets/issues/5953 | 1,756,520,523 | I_kwDODunzps5osmBL | 5,953 | Bad error message when trying to download gated dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"cc @sanchit-gandhi @Vaibhavs10 @lhoestq - this is mainly for demos that use Common Voice datasets as done here: https://github.com/facebookresearch/fairseq/tree/main/examples/mms#-transformers\r\n",
"Hi ! the error for me is\r\n\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/mozilla-foun... | 2023-06-14T10:03:39Z | 2023-06-14T16:36:51Z | 2023-06-14T12:26:32Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message, e.g.:
```sh
Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password..
Will try to load from local cache.
```
If I do the same for a gated dataset on the Hub, I'm not given a nice error message, IMO:
```sh
File ~/hf/lib/python3.10/site-packages/fsspec/implementations/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)
427 except Exception as exc:
428 if policy == "get":
429 # If get failed, then raise a FileNotFoundError
--> 430 raise FileNotFoundError(url) from exc
431 logger.debug(str(exc))
433 return {"name": url, "size": None, **info, "type": "file"}
FileNotFoundError: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0/resolve/main/n_shards.json
```
### Steps to reproduce the bug
```
huggingface-cli logout
```
and then:
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Swahili
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
sw_sample = next(iter(stream_data))["audio"]["array"]
```
### Expected behavior
Better error message
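For comparison, the flow that avoids this failure is to authenticate first; a minimal sketch (the CLI command mirrors the `huggingface-cli logout` step above):
```py
# After `huggingface-cli login` (or with a saved token), pass use_auth_token
# so the gated files resolve instead of raising FileNotFoundError:
stream_data = load_dataset(
    "mozilla-foundation/common_voice_13_0", "en",
    split="test", streaming=True, use_auth_token=True,
)
```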
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5953/timeline | null | completed | null | null | 2.381389 | 1,689 |
https://api.github.com/repos/huggingface/datasets/issues/5951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5951/comments | https://api.github.com/repos/huggingface/datasets/issues/5951/events | https://github.com/huggingface/datasets/issues/5951 | 1,756,363,546 | I_kwDODunzps5or_sa | 5,951 | What is the Right way to use discofuse dataset?? | {
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akesh1235",
"id": 125154243,
"login": "akesh1235",
"node_id": "U_kgDOB3Wzww",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akesh1235",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for opening https://huggingface.co/datasets/discofuse/discussions/3, let's continue the discussion over there if you don't mind",
"I have posted there also sir, please check\r\n@lhoestq"
] | 2023-06-14T08:38:39Z | 2023-06-14T13:25:06Z | 2023-06-14T12:10:16Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | [Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is my understanding of the right way to use this dataset. Is it correct?**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
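For concreteness, one common seq2seq framing of sentence fusion maps the incoherent pair to the fused coherent text; the sketch below follows that framing and is an assumption about the setup, not a confirmation of the question:
```python
def preprocess(example):
    # Assumption: the incoherent sentence pair is the source, the coherent
    # (fused) text is the target; column names as shown in the dataset viewer.
    source = example["incoherent_first_sentence"] + " " + example["incoherent_second_sentence"]
    target = (example["coherent_first_sentence"] + " " + example["coherent_second_sentence"]).strip()
    return {"source": source, "target": target}
```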
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5951/timeline | null | completed | null | null | 3.526944 | 1,691 |
https://api.github.com/repos/huggingface/datasets/issues/5945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5945/comments | https://api.github.com/repos/huggingface/datasets/issues/5945/events | https://github.com/huggingface/datasets/issues/5945 | 1,754,084,577 | I_kwDODunzps5ojTTh | 5,945 | Failing to upload dataset to the hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/77382661?v=4",
"events_url": "https://api.github.com/users/Ar770/events{/privacy}",
"followers_url": "https://api.github.com/users/Ar770/followers",
"following_url": "https://api.github.com/users/Ar770/following{/other_user}",
"gists_url": "https://api.github.com/users/Ar770/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ar770",
"id": 77382661,
"login": "Ar770",
"node_id": "MDQ6VXNlcjc3MzgyNjYx",
"organizations_url": "https://api.github.com/users/Ar770/orgs",
"received_events_url": "https://api.github.com/users/Ar770/received_events",
"repos_url": "https://api.github.com/users/Ar770/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ar770/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ar770/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ar770",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! Feel free to re-run your code later, it will resume automatically where you left",
"Tried many times in the last 2 weeks, problem remains.",
"Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\... | 2023-06-13T05:46:46Z | 2023-07-24T11:56:40Z | 2023-07-24T11:56:40Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, ~60 GB) to the Hub with `push_to_hub`, but it doesn't work.
From time to time one piece of the data (a Parquet file) gets pushed, and then I get `RemoteDisconnected` even though my internet connection is stable.
Please help.
I have been trying to upload the dataset for almost a week.
Thanks
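One of the suggestions in the comments is to shard the dataset to Parquet files locally and upload them manually; a sketch based on that snippet (shard count and paths are placeholders):
```python
from tqdm import tqdm

num_shards = 60  # from the suggested workaround; tune to the dataset size
for index in tqdm(range(num_shards)):
    shard = dataset.shard(num_shards=num_shards, index=index)
    shard.to_parquet(f"data/train-{index:05d}-of-{num_shards:05d}.parquet")
# Then upload the data/ folder manually, e.g. with huggingface_hub's upload_folder.
```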
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload thedataset
### Environment info
python: 3.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5945/timeline | null | completed | null | null | 990.165 | 1,697 |
https://api.github.com/repos/huggingface/datasets/issues/5941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5941/comments | https://api.github.com/repos/huggingface/datasets/issues/5941/events | https://github.com/huggingface/datasets/issues/5941 | 1,751,838,897 | I_kwDODunzps5oavCx | 5,941 | Load Data Sets Too Slow In Train Seq2seq Model | {
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xyx361100238",
"id": 19569322,
"login": "xyx361100238",
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xyx361100238",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! you can speed it up using multiprocessing by passing `num_proc=` to `load_dataset()`",
"already did,but not useful for step Generating train split,it works in step \"Resolving data files\" & \"Downloading data files\" ",
"@mariosasko some advice , thanks!",
"I met the same problem, terrible experience... | 2023-06-12T03:58:43Z | 2023-08-15T02:52:22Z | 2023-08-15T02:52:22Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
The 'Generating train split' step in `load_dataset` is too slow:

### Steps to reproduce the bug
Data: own data, 16 kHz / 16-bit mono WAV
Official script: [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)
Added code:
```python
if data_args.data_path is not None:
    print(data_args.data_path)
    raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)
    raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
    raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True)
```
(change `cache_dir` to another path, e.g. `/DATA/cache`)
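The first suggestion in the comments is to parallelize loading with `num_proc`; a sketch (per the follow-up comments, this speeds up the "Resolving data files" and "Downloading data files" steps, but not "Generating train split" in the reporter's case):
```python
raw_datasets = load_dataset(
    "audiofolder",
    data_dir=data_args.data_path,
    cache_dir=model_args.cache_dir,
    num_proc=8,  # illustrative worker count
)
```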
### Expected behavior
Load data fast, at least 1000+ examples/s:
`Generating train split: 387875 examples [32:24:45, 1154.83 examples/s]`
### Environment info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
| {
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xyx361100238",
"id": 19569322,
"login": "xyx361100238",
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xyx361100238",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5941/timeline | null | completed | null | null | 1,534.894167 | 1,700 |
https://api.github.com/repos/huggingface/datasets/issues/5939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5939/comments | https://api.github.com/repos/huggingface/datasets/issues/5939/events | https://github.com/huggingface/datasets/issues/5939 | 1,749,955,883 | I_kwDODunzps5oTjUr | 5,939 | . | {
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-06-09T14:01:34Z | 2023-06-12T12:19:34Z | 2023-06-12T12:19:19Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5939/timeline | null | completed | null | null | 70.295833 | 1,702 |
https://api.github.com/repos/huggingface/datasets/issues/5936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5936/comments | https://api.github.com/repos/huggingface/datasets/issues/5936/events | https://github.com/huggingface/datasets/issues/5936 | 1,748,424,388 | I_kwDODunzps5oNtbE | 5,936 | Sequence of array not supported for most dtype | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"i... | 2023-06-08T18:18:07Z | 2023-06-14T15:03:34Z | 2023-06-14T15:03:34Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Creating a dataset composed of sequences of arrays fails for most dtypes (see the code below).
### Steps to reproduce the bug
```python
from datasets import Sequence, Array2D, Features, Dataset

import numpy as np

for dtype in [
    "bool",  # ok
    "int8",  # failed
    "int16",  # failed
    "int32",  # failed
    "int64",  # ok
    "uint8",  # failed
    "uint16",  # failed
    "uint32",  # failed
    "uint64",  # failed
    "float16",  # failed
    "float32",  # failed
    "float64",  # ok
]:
    features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})
    sequence = [
        [[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]],
    ]
    array = np.array(sequence, dtype=dtype)
    try:
        dataset = Dataset.from_dict({"foo": [array]}, features=features)
    except Exception as e:
        print(f"Failed for dtype={dtype}")
```
Traceback for `dtype="int8"`:
```
Traceback (most recent call last):
File "/home/qgallouedec/datasets/a.py", line 29, in <module>
raise e
File "/home/qgallouedec/datasets/a.py", line 26, in <module>
dataset = Dataset.from_dict({"foo": [array]}, features=features)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 236, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>>
```
### Expected behavior
Not to fail.
### Environment info
- Python 3.10.6
- datasets: master branch
- Numpy: 1.23.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5936/timeline | null | completed | null | null | 140.7575 | 1,705 |
https://api.github.com/repos/huggingface/datasets/issues/5931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5931/comments | https://api.github.com/repos/huggingface/datasets/issues/5931/events | https://github.com/huggingface/datasets/issues/5931 | 1,745,408,784 | I_kwDODunzps5oCNMQ | 5,931 | `datasets.map` not reusing cached copy by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on... | 2023-06-07T09:03:33Z | 2023-06-21T16:15:40Z | 2023-06-21T16:15:40Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the transform is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions?
One more thing: my dataset occupies 6 GB of storage after I use `map`. Is there any way I can reduce that memory usage?
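As suggested in the comments, when a transform cannot be hashed deterministically you can pin `cache_file_name` so `map` checks a fixed file for the cached result; a sketch for a single split (the path is a placeholder):
```python
vectorized_train = raw_datasets["train"].map(
    prepare_dataset,
    num_proc=num_workers,
    cache_file_name="/path/to/cache/vectorized-train.arrow",  # fixed cache location
)
```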
### Steps to reproduce the bug
```python
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
    self.raw_datasets = self.raw_datasets.cast_column(
        "audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
    )

vectorized_datasets = self.raw_datasets.map(
    self.prepare_dataset,
    remove_columns=next(iter(self.raw_datasets.values())).column_names,
    num_proc=self.num_workers,
    desc="preprocess datasets",
)

# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
    self.is_audio_in_length_range,
    num_proc=self.num_workers,
    input_columns=["input_length"],
)


def prepare_dataset(self, batch):
    # load audio
    sample = batch["audio"]
    inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(batch["input_values"])
    batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
    return batch
```
### Expected behavior
`map` should reuse the cached copy, and, if possible, there should be a technique to reduce memory usage after using `map`.
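One suggestion from the comments above is to pin the cache file explicitly, so that `map` reuses its result even when the transform cannot be hashed deterministically. A minimal sketch, assuming a toy JSON dataset; the file names are arbitrary examples:
```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")  # hypothetical input file

# An explicit cache_file_name makes `map` look up this file on later runs
# instead of relying on a (possibly unstable) hash of the transform:
ds = ds.map(
    lambda batch: batch,
    batched=True,
    cache_file_name="prepared.arrow",  # hypothetical cache path
)
```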
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5931/timeline | null | completed | null | null | 343.201944 | 1,710 |
https://api.github.com/repos/huggingface/datasets/issues/5930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5930/comments | https://api.github.com/repos/huggingface/datasets/issues/5930/events | https://github.com/huggingface/datasets/issues/5930 | 1,745,184,395 | I_kwDODunzps5oBWaL | 5,930 | loading private custom dataset script - authentication error | {
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This issue seems to have been resolved, so I'm closing it."
] | 2023-06-07T06:58:23Z | 2023-06-15T14:49:21Z | 2023-06-15T14:49:20Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Training a model with my custom dataset, stored on the Hugging Face Hub and loaded with a loading script, requires authentication, but I am not sure how to provide it.
I am logged in both in the terminal and in the browser, yet I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
When I added `use_auth_token=True` and logged in via the terminal, I received the same error, or the same error in a different format:
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (error 401)
### Steps to reproduce the bug
1. cloned transformers library locally:
https://huggingface.co/docs/transformers/v4.15.0/examples :
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> cd /transformers/examples/pytorch/audio-classification
> pip install -r requirements.txt
2. created a **loading script**:
> https://huggingface.co/docs/datasets/dataset_script (added next to the dataset)
3. uploaded the **private custom dataset** with its loading script to the Hugging Face Hub:
> https://huggingface.co/docs/datasets/dataset_script
4. added dataset loading script to **local directory** in the above cloned transformers library:
> cd /transformers/examples/pytorch/audio-classification
5. logged in to HuggingFace on local terminal with :
> **huggingface-cli login**
6. run the model with the custom dataset stored on HuggingFace with code: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
cd /transformers/examples/pytorch/audio-classification
> python run_audio_classification.py \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir l/users/flck/outputs/wav2vec2-base-s \
> --overwrite_output_dir \
> --dataset_name s \
> --dataset_config_name s \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> **--use_auth_token=True**
### Expected behavior
Be able to train a model with https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py using a private custom dataset stored on the Hugging Face Hub.
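For anyone hitting the same 401, a quick diagnostic sketch; the repo id is taken from the error above, and everything else is an assumption about the setup, not a confirmed fix:
```python
from huggingface_hub import HfFolder
from datasets import load_dataset

# After `huggingface-cli login`, a token should be stored locally:
assert HfFolder.get_token() is not None, "no token found; log in again"

# Pass the token explicitly when loading the private dataset:
ds = load_dataset("fkov/s", use_auth_token=True)
```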
### Environment info
- datasets version: 2.12.0
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5930/timeline | null | completed | null | null | 199.849167 | 1,711 |
https://api.github.com/repos/huggingface/datasets/issues/5929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5929/comments | https://api.github.com/repos/huggingface/datasets/issues/5929/events | https://github.com/huggingface/datasets/issues/5929 | 1,744,478,456 | I_kwDODunzps5n-qD4 | 5,929 | Importing PyTorch reduces multiprocessing performance for map | {
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.",
"Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigati... | 2023-06-06T19:42:25Z | 2023-06-16T13:09:12Z | 2023-06-16T13:09:12Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```python
import datasets
datasets.disable_caching()
from datasets import Dataset
import time

PROC = 32

if __name__ == "__main__":
    dataset = [True] * 10000000
    dataset = Dataset.from_dict({'train': dataset})
    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    end = time.time()
    print(end - start)
```
Takes around 4 seconds on my machine.
While the same code, but with an `import torch`:
```python
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch

PROC = 32

if __name__ == "__main__":
    dataset = [True] * 10000000
    dataset = Dataset.from_dict({'train': dataset})
    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    end = time.time()
    print(end - start)
```
takes around 22 seconds.
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` with multiprocessing.
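One hypothesis worth ruling out (my assumption, not a confirmed diagnosis): importing torch configures extra intra-op threads that contend with the 32 worker processes. Capping them is a cheap experiment; a sketch:
```python
import time
import datasets
from datasets import Dataset
import torch

torch.set_num_threads(1)  # cap torch's intra-op thread pool
datasets.disable_caching()

PROC = 32

if __name__ == "__main__":
    dataset = Dataset.from_dict({"train": [True] * 10000000})
    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    print(time.time() - start)
```
If the gap persists, the difference more likely comes from how torch modifies the multiprocessing machinery on import, as noted in the comments.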
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5929/timeline | null | completed | null | null | 233.446389 | 1,712 |
https://api.github.com/repos/huggingface/datasets/issues/5927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5927/comments | https://api.github.com/repos/huggingface/datasets/issues/5927/events | https://github.com/huggingface/datasets/issues/5927 | 1,744,009,032 | I_kwDODunzps5n83dI | 5,927 | `IndexError` when indexing `Sequence` of `Array2D` with `None` values | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.",
"Same issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7fcbe5b1575c8d162b65b9397b3dfda995a4e048/src/datasets/features/feat... | 2023-06-06T14:36:22Z | 2023-06-13T12:39:39Z | 2023-06-09T13:23:50Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Having `None` values in a `Sequence` of `ArrayND` fails.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
[
[[0]],
None,
None,
]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature}))
dataset[0] # error raised only when indexing
```
```
Traceback (most recent call last):
File "/Users/quentingallouedec/gia/c.py", line 13, in <module>
dataset[0] # error raised only when indexing
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__
return self._getitem(key)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem
formatted_output = format_table(
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table
return formatter(pa_table, query_type=query_type)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__
return self.format_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row
row = self.python_arrow_extractor().extract_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row
return _unnest(pa_table.to_pydict())
File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict
File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist
File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist
File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
File "<__array_function__ internals>", line 200, in insert
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert
old_mask[indices] = False
IndexError: index 3 is out of bounds for axis 0 with size 3
```
AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.
I strongly suspect that the problem comes from this line, i.e. that `np.insert` is misused:
https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729
To put it simply, you want something that does this:
```python
import numpy as np
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
np.insert(numpy_arr, null_indices, np.nan, axis=0)
# raises an error, instead of outputting
# array([[[ 0.]],
# [[nan]],
# [[nan]]])
```
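The one-line offset suggested in the comments can be checked directly. A sketch of the arithmetic, using the same toy inputs:
```python
import numpy as np

numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])

# np.insert interprets each index relative to the original (compacted) array,
# so later null positions must be shifted back by the number of nulls
# that precede them:
adjusted = null_indices - np.arange(len(null_indices))  # array([1, 1])
out = np.insert(numpy_arr.astype(np.float64), adjusted, np.nan, axis=0)
print(out)  # [[[0.]], [[nan]], [[nan]]]
```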
### Expected behavior
The previous code should not raise an error.
### Environment info
- Python 3.10.11
- datasets 2.10.0
- pyarrow 12.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5927/timeline | null | completed | null | null | 70.791111 | 1,714 |
https://api.github.com/repos/huggingface/datasets/issues/5925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5925/comments | https://api.github.com/repos/huggingface/datasets/issues/5925/events | https://github.com/huggingface/datasets/issues/5925 | 1,741,941,436 | I_kwDODunzps5n0-q8 | 5,925 | Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4",
"events_url": "https://api.github.com/users/mtkinit/events{/privacy}",
"followers_url": "https://api.github.com/users/mtkinit/followers",
"following_url": "https://api.github.com/users/mtkinit/following{/other_user}",
"gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtkinit",
"id": 78868366,
"login": "mtkinit",
"node_id": "MDQ6VXNlcjc4ODY4MzY2",
"organizations_url": "https://api.github.com/users/mtkinit/orgs",
"received_events_url": "https://api.github.com/users/mtkinit/received_events",
"repos_url": "https://api.github.com/users/mtkinit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtkinit",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-06-05T14:46:04Z | 2023-06-19T17:22:43Z | 2023-06-19T17:22:43Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed so that it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful to reflect this in the return type annotation of the `datasets.list_datasets` function.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset in datasets.list_datasets(with_details=True)[:limit]:
    ...
```
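A defensive workaround until the annotation is settled is to avoid slicing altogether; a sketch, where `limit` stands in for whatever cap the caller uses:
```python
import itertools
import datasets

limit = 10  # example cap

# islice works whether list_datasets returns a list or a lazy iterable:
for info in itertools.islice(datasets.list_datasets(with_details=True), limit):
    print(info)
```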
### Expected behavior
It would be helpful to reflect this in the return type annotation of the `datasets.list_datasets` function.
### Environment info
Ubuntu 22.04
datasets 2.12.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5925/timeline | null | completed | null | null | 338.610833 | 1,716 |
https://api.github.com/repos/huggingface/datasets/issues/5923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5923/comments | https://api.github.com/repos/huggingface/datasets/issues/5923/events | https://github.com/huggingface/datasets/issues/5923 | 1,737,436,227 | I_kwDODunzps5njyxD | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4",
"events_url": "https://api.github.com/users/ehuangc/events{/privacy}",
"followers_url": "https://api.github.com/users/ehuangc/followers",
"following_url": "https://api.github.com/users/ehuangc/following{/other_user}",
"gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ehuangc",
"id": 71412682,
"login": "ehuangc",
"node_id": "MDQ6VXNlcjcxNDEyNjgy",
"organizations_url": "https://api.github.com/users/ehuangc/orgs",
"received_events_url": "https://api.github.com/users/ehuangc/received_events",
"repos_url": "https://api.github.com/users/ehuangc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ehuangc",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; ... | 2023-06-02T04:16:32Z | 2024-06-27T10:07:49Z | 2024-02-25T16:38:03Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
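As a first diagnostic, mirroring the checks requested in the comments, printing which binaries are actually loaded can reveal a mixed conda/pip install; a sketch:
```python
import importlib.metadata as md

import pyarrow

# A pip-installed pyarrow shadowing a conda-installed one (or vice versa)
# is a common cause of "size changed, may indicate binary incompatibility":
print("pyarrow", pyarrow.__version__, "from", pyarrow.__file__)
print("datasets", md.version("datasets"))
```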
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5923/timeline | null | completed | null | null | 6,444.358611 | 1,718 |
https://api.github.com/repos/huggingface/datasets/issues/5922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5922/comments | https://api.github.com/repos/huggingface/datasets/issues/5922/events | https://github.com/huggingface/datasets/issues/5922 | 1,736,898,953 | I_kwDODunzps5nhvmJ | 5,922 | Length of table does not accurately reflect the split | {
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amogkam",
"id": 8068268,
"login": "amogkam",
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"repos_url": "https://api.github.com/users/amogkam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amogkam",
"user_view_type": "public"
} | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | [] | null | [
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that w... | 2023-06-01T18:56:26Z | 2023-06-02T16:13:31Z | 2023-06-02T16:13:31Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, rather than the length of the entire unsplit dataset.
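For completeness, when a materialized per-split table is needed, the indices mapping can be flattened. A sketch, assuming `hf_dataset` is the `DatasetDict` returned by `train_test_split`:
```python
# flatten_indices() writes a new Arrow table containing only the split's rows,
# so the underlying table length matches the split length afterwards:
train_ds = hf_dataset["train"].flatten_indices()
assert len(train_ds.data) == len(train_ds)
```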
### Environment info
datasets 2.10.1
python 3.10.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5922/timeline | null | completed | null | null | 21.284722 | 1,719 |
https://api.github.com/repos/huggingface/datasets/issues/5913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5913/comments | https://api.github.com/repos/huggingface/datasets/issues/5913/events | https://github.com/huggingface/datasets/issues/5913 | 1,731,427,484 | I_kwDODunzps5nM3yc | 5,913 | I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17508662?v=4",
"events_url": "https://api.github.com/users/cjt222/events{/privacy}",
"followers_url": "https://api.github.com/users/cjt222/followers",
"following_url": "https://api.github.com/users/cjt222/following{/other_user}",
"gists_url": "https://api.github.com/users/cjt222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cjt222",
"id": 17508662,
"login": "cjt222",
"node_id": "MDQ6VXNlcjE3NTA4NjYy",
"organizations_url": "https://api.github.com/users/cjt222/orgs",
"received_events_url": "https://api.github.com/users/cjt222/received_events",
"repos_url": "https://api.github.com/users/cjt222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cjt222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cjt222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cjt222",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @cjt222.\r\n\r\nWhat is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead. ",
"> Thanks for reporting, @cjt222.\r\n> \r\n> What is the structure of your JSON files. Please note that it is normally simpler if the data file... | 2023-05-30T02:55:26Z | 2023-07-24T12:00:38Z | 2023-07-24T12:00:38Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 84.35it/s]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 27.72it/s]
Generating train split: 0 examples [00:00, ? examples/s]
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
  for _, table in generator:
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables
  io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764
### Steps to reproduce the bug
1. `data_files = ["1.json", "2.json", "3.json"]`
2. `dataset = load_dataset('json', data_files=data_files)`
### Expected behavior
Read the dataset normally.
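Following the suggestion in the comments, converting the files to JSON Lines first keeps each Arrow batch small. A sketch, assuming each input file contains a single top-level list of objects:
```python
import json

from datasets import load_dataset

for name in ["1.json", "2.json", "3.json"]:
    with open(name) as src, open(name.replace(".json", ".jsonl"), "w") as dst:
        for record in json.load(src):  # assumes a top-level JSON array
            dst.write(json.dumps(record) + "\n")

dataset = load_dataset("json", data_files=["1.jsonl", "2.jsonl", "3.jsonl"])
```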
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.3.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5913/timeline | null | completed | null | null | 1,329.086667 | 1,728 |
https://api.github.com/repos/huggingface/datasets/issues/5912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5912/comments | https://api.github.com/repos/huggingface/datasets/issues/5912/events | https://github.com/huggingface/datasets/issues/5912 | 1,730,299,852 | I_kwDODunzps5nIkfM | 5,912 | Missing elements in `map` a batched dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sachinruk",
"id": 1410927,
"login": "sachinruk",
"node_id": "MDQ6VXNlcjE0MTA5Mjc=",
"organizations_url": "https://api.github.com/users/sachinruk/orgs",
"received_events_url": "https://api.github.com/users/sachinruk/received_events",
"repos_url": "https://api.github.com/users/sachinruk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sachinruk",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! in your code batching is **only used within** `map`, to process examples in batch. The dataset itself however is not batched and returns elements one by one.\r\n\r\nTo iterate on batches, you can do\r\n```python\r\nfor batch in dataset.iter(batch_size=8):\r\n ...\r\n```"
] | 2023-05-29T08:09:19Z | 2023-07-26T15:48:15Z | 2023-07-26T15:48:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of a possible 6 elements in the batch (it is 6 because, out of the eight, two are bad links in LAION). A reproducible [Kaggle kernel](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
The weirdest part is that, when inspecting the sizes of the tensors as shown below, both `tokenized_captions["input_ids"]` and `image_features` have the correct shapes; the output simply has only one element (with the batch dimension squeezed out).
```python
import io
import logging

import PIL
import requests
import torch
import datasets
from PIL import Image

logger = logging.getLogger(__name__)

# `tokenizer` and `feature_extractor` are assumed to be defined elsewhere
# (e.g., a CLIP tokenizer and image processor).

class CollateFn:
    def get_image(self, url):
        try:
            response = requests.get(url)
            return Image.open(io.BytesIO(response.content)).convert("RGB")
        except PIL.UnidentifiedImageError:
            logger.info(f"Reading error: Could not transform {url}")
            return None
        except requests.exceptions.ConnectionError:
            logger.info(f"Connection error: Could not transform {url}")
            return None

    def __call__(self, batch):
        images = [self.get_image(url) for url in batch["url"]]
        captions = [caption for caption, image in zip(batch["caption"], images) if image is not None]
        images = [image for image in images if image is not None]
        tokenized_captions = tokenizer(
            captions,
            padding="max_length",
            truncation=True,
            max_length=tokenizer.model_max_length,
            return_tensors="pt",
        )
        image_features = torch.stack([torch.Tensor(feature_extractor(image)["pixel_values"][0]) for image in images])
        # import pdb; pdb.set_trace()
        return {"input_ids": tokenized_captions["input_ids"], "images": image_features}


collate_fn = CollateFn()
laion_ds = datasets.load_dataset("laion/laion400m", split="train", streaming=True)
laion_ds_batched = laion_ds.map(collate_fn, batched=True, batch_size=8, remove_columns=next(iter(laion_ds)).keys())
```
### Steps to reproduce the bug
A reproducible [Kaggle kernel](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
### Expected behavior
Would expect `next(iter(laion_ds_batched))` to produce two tensors of shape `(batch_size, 77)` and `(batch_size, *image_shape)`.
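Following the reply above, batched iteration also has to be requested at consumption time, since the mapped dataset still yields one element at a time. A sketch, continuing the snippet above:
```python
for batch in laion_ds_batched.iter(batch_size=8):
    input_ids = batch["input_ids"]  # (batch_size, 77); smaller when bad links are dropped
    images = batch["images"]
```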
### Environment info
datasets==2.12.0
python==3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5912/timeline | null | completed | null | null | 1,399.648889 | 1,729 |
https://api.github.com/repos/huggingface/datasets/issues/5910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5910/comments | https://api.github.com/repos/huggingface/datasets/issues/5910/events | https://github.com/huggingface/datasets/issues/5910 | 1,728,909,790 | I_kwDODunzps5nDRHe | 5,910 | Cannot use both set_format and set_transform | {
"avatar_url": "https://avatars.githubusercontent.com/u/14046002?v=4",
"events_url": "https://api.github.com/users/ybouane/events{/privacy}",
"followers_url": "https://api.github.com/users/ybouane/followers",
"following_url": "https://api.github.com/users/ybouane/following{/other_user}",
"gists_url": "https://api.github.com/users/ybouane/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ybouane",
"id": 14046002,
"login": "ybouane",
"node_id": "MDQ6VXNlcjE0MDQ2MDAy",
"organizations_url": "https://api.github.com/users/ybouane/orgs",
"received_events_url": "https://api.github.com/users/ybouane/received_events",
"repos_url": "https://api.github.com/users/ybouane/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ybouane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybouane/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ybouane",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Currently, it's not possible to chain `set_format`/`set_transform` calls (plus, this is a breaking change if we decide to implement it), so I see two possible solutions:\r\n* using `set_format`/`set_transform` for the 1st transform and then passing the transformed example/batch to the 2nd transform\r\n* implementi... | 2023-05-27T19:22:23Z | 2023-07-09T21:40:54Z | 2023-06-16T14:41:24Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I need to process some data using the `set_transform` method, but I also need the data to be formatted for PyTorch before processing it.
I don't see anything in the documentation saying that the two methods cannot be used at the same time.
### Steps to reproduce the bug
```python
from datasets import load_dataset

ds = load_dataset("mnist", split="train")
ds.set_format(type="torch")

def transform(entry):
    return entry["image"].double()

ds.set_transform(transform)
print(ds[0])
```
### Expected behavior
It should print the PyTorch tensor image as a double, but it errors because `entry` in the transform function doesn't receive a PyTorch tensor to begin with; it receives a PIL Image, so `entry["image"].double()` fails because the value isn't a PyTorch tensor.
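In the meantime, one workaround (a sketch of my own, not an official API; the float64 conversion is an assumption about the goal) is to do the tensor conversion inside the transform itself instead of combining it with `set_format`:
```python
import numpy as np
import torch
from datasets import load_dataset

ds = load_dataset("mnist", split="train")

def transform(batch):
    # set_transform hands over the raw (PIL) examples, so convert manually here:
    batch["image"] = [
        torch.as_tensor(np.asarray(img), dtype=torch.float64) for img in batch["image"]
    ]
    return batch

ds.set_transform(transform)
print(ds[0]["image"].dtype)  # torch.float64
```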
### Environment info
Latest versions.
### Note:
It would at least be handy to have access to a function that applies `dataset.set_format`'s conversion inside the `set_transform` function.
Something like:
```python
from datasets import load_dataset, do_format  # do_format is the proposed (hypothetical) helper

ds = load_dataset("mnist", split="train")

def transform(entry):
    entry = do_format(entry, type="torch")
    return entry["image"].double()

ds.set_transform(transform)
print(ds[0])
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/14046002?v=4",
"events_url": "https://api.github.com/users/ybouane/events{/privacy}",
"followers_url": "https://api.github.com/users/ybouane/followers",
"following_url": "https://api.github.com/users/ybouane/following{/other_user}",
"gists_url": "https://api.github.com/users/ybouane/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ybouane",
"id": 14046002,
"login": "ybouane",
"node_id": "MDQ6VXNlcjE0MDQ2MDAy",
"organizations_url": "https://api.github.com/users/ybouane/orgs",
"received_events_url": "https://api.github.com/users/ybouane/received_events",
"repos_url": "https://api.github.com/users/ybouane/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ybouane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybouane/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ybouane",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5910/timeline | null | completed | null | null | 475.316944 | 1,730 |
https://api.github.com/repos/huggingface/datasets/issues/5906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5906/comments | https://api.github.com/repos/huggingface/datasets/issues/5906/events | https://github.com/huggingface/datasets/issues/5906 | 1,728,171,113 | I_kwDODunzps5nAcxp | 5,906 | Could you unpin responses version? | {
"avatar_url": "https://avatars.githubusercontent.com/u/47789026?v=4",
"events_url": "https://api.github.com/users/kenimou/events{/privacy}",
"followers_url": "https://api.github.com/users/kenimou/followers",
"following_url": "https://api.github.com/users/kenimou/following{/other_user}",
"gists_url": "https://api.github.com/users/kenimou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kenimou",
"id": 47789026,
"login": "kenimou",
"node_id": "MDQ6VXNlcjQ3Nzg5MDI2",
"organizations_url": "https://api.github.com/users/kenimou/orgs",
"received_events_url": "https://api.github.com/users/kenimou/received_events",
"repos_url": "https://api.github.com/users/kenimou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kenimou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenimou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kenimou",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2023-05-26T20:02:14Z | 2023-05-30T17:53:31Z | 2023-05-30T17:53:31Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? `responses` is a testing library, and we use it for our own tests as well. We do not want to use a very outdated version.
### Steps to reproduce the bug
Could not install this library due to a dependency conflict.
### Expected behavior
Being able to install `datasets`.
### Environment info
linux 64 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5906/timeline | null | completed | null | null | 93.854722 | 1,734 |
https://api.github.com/repos/huggingface/datasets/issues/5898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5898/comments | https://api.github.com/repos/huggingface/datasets/issues/5898/events | https://github.com/huggingface/datasets/issues/5898 | 1,726,190,481 | I_kwDODunzps5m45OR | 5,898 | Loading The flores data set for specific language | {
"avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4",
"events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}",
"followers_url": "https://api.github.com/users/106AbdulBasit/followers",
"following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}",
"gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/106AbdulBasit",
"id": 36159918,
"login": "106AbdulBasit",
"node_id": "MDQ6VXNlcjM2MTU5OTE4",
"organizations_url": "https://api.github.com/users/106AbdulBasit/orgs",
"received_events_url": "https://api.github.com/users/106AbdulBasit/received_events",
"repos_url": "https://api.github.com/users/106AbdulBasit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/106AbdulBasit",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")"
] | 2023-05-25T17:08:55Z | 2023-05-25T17:21:38Z | 2023-05-25T17:21:37Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am trying to load the Flores dataset.
The code which is given is:
```python
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This gives a config-name error:
"ValueError: Config name is missing"
Now if I add a config, it gives me this error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."
How can I load the data for a specific language? I couldn't find any tutorial; can anyone help me out?
### Steps to reproduce the bug
Step one: load the dataset:
`from datasets import load_dataset`
`dataset = load_dataset("facebook/flores")`
This gives the config error.
Once a config is given, it gives this error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
### Expected behavior
The dataset should be loaded, but I am receiving an error.
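For reference, the reply above resolves this: the language code is the config name and is passed as a separate argument, not appended to the repo id. A sketch:
```python
from datasets import load_dataset

# Config name as the second positional argument:
dataset = load_dataset("facebook/flores", "ace_Arab")
```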
### Environment info
`datasets`, Python | {
"avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4",
"events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}",
"followers_url": "https://api.github.com/users/106AbdulBasit/followers",
"following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}",
"gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/106AbdulBasit",
"id": 36159918,
"login": "106AbdulBasit",
"node_id": "MDQ6VXNlcjM2MTU5OTE4",
"organizations_url": "https://api.github.com/users/106AbdulBasit/orgs",
"received_events_url": "https://api.github.com/users/106AbdulBasit/received_events",
"repos_url": "https://api.github.com/users/106AbdulBasit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/106AbdulBasit",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5898/timeline | null | completed | null | null | 0.211667 | 1,742 |
https://api.github.com/repos/huggingface/datasets/issues/5896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5896/comments | https://api.github.com/repos/huggingface/datasets/issues/5896/events | https://github.com/huggingface/datasets/issues/5896 | 1,726,022,500 | I_kwDODunzps5m4QNk | 5,896 | HuggingFace does not cache downloaded files aggressively/early enough | {
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"events_url": "https://api.github.com/users/jack-jjm/events{/privacy}",
"followers_url": "https://api.github.com/users/jack-jjm/followers",
"following_url": "https://api.github.com/users/jack-jjm/following{/other_user}",
"gists_url": "https://api.github.com/users/jack-jjm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jack-jjm",
"id": 2124157,
"login": "jack-jjm",
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"organizations_url": "https://api.github.com/users/jack-jjm/orgs",
"received_events_url": "https://api.github.com/users/jack-jjm/received_events",
"repos_url": "https://api.github.com/users/jack-jjm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jack-jjm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jack-jjm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jack-jjm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I also faced this. Any update?",
"We've dropped the `apache-beam` dependency in https://huggingface.co/datasets/wikipedia/discussions/19, so you should no longer get this error."
] | 2023-05-25T15:14:36Z | 2024-03-15T15:36:07Z | 2024-03-15T15:36:07Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I wrote the following script:
```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```
I ran it and spent 90 minutes downloading a 20GB file. Then I saw:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.3G/20.3G [1:30:29<00:00, 3.73MB/s]
Traceback (most recent call last):
File "/home/jack/Code/Projects/Transformers/Codebase/main.py", line 5, in <module>
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
File "/home/jack/.local/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 883, in download_and_prepare
self._save_info()
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 2037, in _save_info
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
And the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.
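In hindsight, a defensive sketch like the following would have saved the download — this is purely a hypothetical guard on my side, assuming the missing `apache_beam` import is the only failure:
```python
import importlib.util

# Fail fast if apache_beam is missing, *before* the 90-minute download starts.
if importlib.util.find_spec("apache_beam") is None:
    raise ModuleNotFoundError("Run `pip install apache_beam` first")

import datasets

dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```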
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
datasets 2.10.1
Python 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5896/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5896/timeline | null | completed | null | null | 7,080.358611 | 1,744 |
https://api.github.com/repos/huggingface/datasets/issues/5895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5895/comments | https://api.github.com/repos/huggingface/datasets/issues/5895/events | https://github.com/huggingface/datasets/issues/5895 | 1,725,467,252 | I_kwDODunzps5m2Ip0 | 5,895 | The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4",
"events_url": "https://api.github.com/users/DongHande/events{/privacy}",
"followers_url": "https://api.github.com/users/DongHande/followers",
"following_url": "https://api.github.com/users/DongHande/following{/other_user}",
"gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DongHande",
"id": 45357817,
"login": "DongHande",
"node_id": "MDQ6VXNlcjQ1MzU3ODE3",
"organizations_url": "https://api.github.com/users/DongHande/orgs",
"received_events_url": "https://api.github.com/users/DongHande/received_events",
"repos_url": "https://api.github.com/users/DongHande/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DongHande/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DongHande",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\... | 2023-05-25T09:39:06Z | 2023-05-29T02:32:12Z | 2023-05-29T02:32:12Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that seems to be caused by confusion between the directory name string and the split string of the dataset.
When I use the script `datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`, it fails, but it succeeds when I add the `streaming=True` parameter.
The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/.
The traceback logs are as follows:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
    instructions = make_file_instructions(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
    name2filenames = {
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Steps to reproduce the bug
1. import datasets library function: ```from datasets import load_dataset```
2. load dataset: ```ds=load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)```
### Expected behavior
The dataset can be loaded successfully without the streaming setting.
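In the meantime, the streaming variant is the only call that works for me — shown here only as a sketch of the current workaround, not the expected fix:
```python
from datasets import load_dataset

# Succeeds today, while the same call without streaming=True fails as above.
ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    streaming=True,
    use_auth_token=True,
)
```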
### Environment info
Linux,
python=3.9
datasets=2.12.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4",
"events_url": "https://api.github.com/users/DongHande/events{/privacy}",
"followers_url": "https://api.github.com/users/DongHande/followers",
"following_url": "https://api.github.com/users/DongHande/following{/other_user}",
"gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DongHande",
"id": 45357817,
"login": "DongHande",
"node_id": "MDQ6VXNlcjQ1MzU3ODE3",
"organizations_url": "https://api.github.com/users/DongHande/orgs",
"received_events_url": "https://api.github.com/users/DongHande/received_events",
"repos_url": "https://api.github.com/users/DongHande/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DongHande/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DongHande",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5895/timeline | null | completed | null | null | 88.885 | 1,745 |
https://api.github.com/repos/huggingface/datasets/issues/5892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5892/comments | https://api.github.com/repos/huggingface/datasets/issues/5892/events | https://github.com/huggingface/datasets/issues/5892 | 1,722,503,824 | I_kwDODunzps5mq1KQ | 5,892 | User access requests with manual review do not notify the dataset owner | {
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"cc @SBrandeis",
"I think this has been addressed.\r\n\r\nPlease open a new issue if you are still not getting notified."
] | 2023-05-23T17:27:46Z | 2023-07-21T13:55:37Z | 2023-07-21T13:55:36Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of pending requests. However, currently nothing happens, so an access request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.
### Steps to reproduce the bug
1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified
### Expected behavior
The dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.
### Environment info
n/a | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5892/timeline | null | completed | null | null | 1,412.463889 | 1,748 |
https://api.github.com/repos/huggingface/datasets/issues/5887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5887/comments | https://api.github.com/repos/huggingface/datasets/issues/5887/events | https://github.com/huggingface/datasets/issues/5887 | 1,722,166,382 | I_kwDODunzps5mpixu | 5,887 | HuggingFace dataset example gives error | {
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donhuvy",
"id": 1328316,
"login": "donhuvy",
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donhuvy",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}"... | null | [
"Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https://huggingface.co/transformers/v3.0.2/model_doc/distilbert.html, `DistilBert doesn’t have token_type_ids, you don’... | 2023-05-23T14:09:05Z | 2023-07-25T14:01:01Z | 2023-07-25T14:01:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug


### Steps to reproduce the bug
Use the linked notebook as the reference document: https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
if i > 5:
break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
### Expected behavior
The example runs successfully on Google Colab (free).
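A minimal sketch of a possible workaround, assuming the failure is only the extra `token_type_ids` key that DistilBert models do not accept:
```python
for i, batch in enumerate(dataloader):
    batch.to(device)
    # DistilBert has no token_type_ids; drop the key before the forward pass.
    batch.pop("token_type_ids", None)
    outputs = model(**batch)
```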
### Environment info
Windows 11 x64, Google Colab free (my Google Drive is nearly empty, about 200 MB, but I don't think it causes the problem) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5887/timeline | null | completed | null | null | 1,511.865278 | 1,751 |
https://api.github.com/repos/huggingface/datasets/issues/5884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5884/comments | https://api.github.com/repos/huggingface/datasets/issues/5884/events | https://github.com/huggingface/datasets/issues/5884 | 1,719,548,172 | I_kwDODunzps5mfjkM | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}"... | null | [
"May eventually be solved in #5883 ",
"#self-assign"
] | 2023-05-22T12:03:06Z | 2023-06-09T16:04:56Z | 2023-06-09T16:04:55Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception, e.g. for the `é` character: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to reproduce the bug
Running the following script will eventually fail when reaching the first batch that contains non-ASCII-compatible strings.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
>>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)
```
### Expected behavior
The following script should run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_`, since some characters are not ASCII-compatible and that would lead to an issue when applying the `map`.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
```
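For reference, the underlying NumPy behavior can be reproduced in isolation — a minimal sketch, assuming the ASCII-only coercion to `np.bytes_` is indeed the culprit:
```python
import numpy as np

np.array("é", dtype=np.bytes_)   # raises UnicodeEncodeError ('ascii' codec)
np.array("é".encode("utf-8"))    # fine: explicit UTF-8 bytes
np.array("é", dtype=np.str_)     # fine: unicode dtype keeps the character
```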
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5884/timeline | null | completed | null | null | 436.030278 | 1,755 |
https://api.github.com/repos/huggingface/datasets/issues/5876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5876/comments | https://api.github.com/repos/huggingface/datasets/issues/5876/events | https://github.com/huggingface/datasets/issues/5876 | 1,717,978,985 | I_kwDODunzps5mZkdp | 5,876 | Incompatibility with DataLab | {
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/helpmefindaname",
"id": 26192135,
"login": "helpmefindaname",
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"type": "User",
"url": "https://api.github.com/users/helpmefindaname",
"user_view_type": "public"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?",
"I think we should use clobber and show a... | 2023-05-20T01:39:11Z | 2023-05-25T06:42:34Z | 2023-05-25T06:42:34Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register filesystems in `fsspec`, expecting that the filesystems have not been registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think a simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This allows the registry to overwrite previous registrations. This should work, as the datalabs filesystems are copies of the datasets filesystems. However, I don't know if it is guaranteed to be compatible with other libraries that might use the same protocols.
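Concretely, the one-line change I have in mind (where `fs_class` is each compression filesystem class from the registration loop in the traceback, and assuming no caller relies on the first registration winning):
```python
import fsspec

# clobber=True overwrites a previously registered implementation for the
# same protocol instead of raising ValueError.
fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
```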
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a `ValueError`.
### Environment info
datalabs==0.4.15
datasets==2.12.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5876/timeline | null | completed | null | null | 125.056389 | 1,761 |
https://api.github.com/repos/huggingface/datasets/issues/5875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5875/comments | https://api.github.com/repos/huggingface/datasets/issues/5875/events | https://github.com/huggingface/datasets/issues/5875 | 1,716,770,394 | I_kwDODunzps5mU9Za | 5,875 | Why split slicing doesn't behave like list slicing ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul",
"user_view_type": "public"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/1774"
] | 2023-05-19T07:21:10Z | 2024-01-31T15:54:18Z | 2024-01-31T15:54:18Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do:
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like Python lists (no exception raised, the whole available range is kept):
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
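A user-side sketch of the clamping behavior I mean, assuming one is willing to load the full split first:
```python
from datasets import load_dataset

n = 999_999_999
ds = load_dataset("mnist", split="train")
ds = ds.select(range(min(n, len(ds))))  # clamps like list slicing
print(len(ds))  # 60000
```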
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5875/timeline | null | completed | null | null | 6,176.552222 | 1,762 |
https://api.github.com/repos/huggingface/datasets/issues/5874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5874/comments | https://api.github.com/repos/huggingface/datasets/issues/5874/events | https://github.com/huggingface/datasets/issues/5874 | 1,715,708,930 | I_kwDODunzps5mQ6QC | 5,874 | Using as_dataset on a "parquet" builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/9039058?v=4",
"events_url": "https://api.github.com/users/rems75/events{/privacy}",
"followers_url": "https://api.github.com/users/rems75/followers",
"following_url": "https://api.github.com/users/rems75/following{/other_user}",
"gists_url": "https://api.github.com/users/rems75/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rems75",
"id": 9039058,
"login": "rems75",
"node_id": "MDQ6VXNlcjkwMzkwNTg=",
"organizations_url": "https://api.github.com/users/rems75/orgs",
"received_events_url": "https://api.github.com/users/rems75/received_events",
"repos_url": "https://api.github.com/users/rems75/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rems75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rems75/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rems75",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi! You can refer to [this doc](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path/to/parquet\")`) and allows writing P... | 2023-05-18T14:09:03Z | 2023-05-31T13:23:55Z | 2023-05-31T13:23:55Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder("rotten_tomatoes")
>>> ds = builder.download_and_prepare("./output_dir", file_format="parquet")
```
The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it raises:
```
FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail:
[errno 2] No such file or directory.
```
### Steps to reproduce the bug
1. Create a custom builder of some sort: `builder = CustomBuilder()`.
2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`.
3. Run `dataset = builder.as_dataset()`.
### Expected behavior
I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part).
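For completeness, the only way I found to read the exported files back is to bypass the builder entirely — a sketch, assuming the Parquet files in `./output_dir` are the ones written above:
```python
from datasets import load_dataset

dataset = load_dataset("parquet", data_files="./output_dir/*.parquet", split="train")
```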
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5874/timeline | null | completed | null | null | 311.247778 | 1,763 |
https://api.github.com/repos/huggingface/datasets/issues/5871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5871/comments | https://api.github.com/repos/huggingface/datasets/issues/5871/events | https://github.com/huggingface/datasets/issues/5871 | 1,712,573,073 | I_kwDODunzps5mE8qR | 5,871 | data configuration hash suffix depends on uncanonicalized data_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kylrth",
"id": 5044802,
"login": "kylrth",
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"repos_url": "https://api.github.com/users/kylrth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kylrth",
"user_view_type": "public"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kylrth",
"id": 5044802,
"login": "kylrth",
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"repos_url": "https://api.github.com/users/kylrth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kylrth",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url... | null | [
"It could even use `os.path.realpath` to resolve symlinks.",
"Indeed, it makes sense to normalize `data_dir`. Feel free to submit a PR (this can be \"fixed\" [here](https://github.com/huggingface/datasets/blob/89f775226321ba94e5bf4670a323c0fb44f5f65c/src/datasets/builder.py#L173))",
"#self-assign"
] | 2023-05-16T18:56:04Z | 2023-06-02T15:52:05Z | 2023-06-02T15:52:05Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that that was the cause of my dataset being processed anew instead of the cached version being used.
### Steps to reproduce the bug
1. Follow the steps to manually download the `recipe_nlg` dataset to `/data/recipenlg`.
2. Load it using `load_dataset`, once without a trailing slash and once with one:
```python
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg")
Using custom data configuration default-082278caeea85765
Downloading and preparing dataset recipe_nlg/default to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Dataset recipe_nlg downloaded and prepared to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.10s/it]
DatasetDict({
train: Dataset({
features: ['id', 'title', 'ingredients', 'directions', 'link', 'source', 'ner'],
num_rows: 2231142
})
})
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg/")
Using custom data configuration default-83e87680785d0493
Downloading and preparing dataset recipe_nlg/default to /home/user/.cache/huggingface/datasets/recipe_nlg/default-83e87680785d0493/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Generating train split: 1%| | 12701/2231142 [00:04<13:15, 2790.25 examples/s
^C
```
3. Observe that the hash suffix in the custom data configuration changes due to the altered string.
### Expected behavior
I think I would expect the hash to remain constant if it actually points to the same location on disk. I would expect the use of `os.path.normpath` to canonicalize the paths.
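For illustration, the normalization I have in mind — `os.path.realpath` being the stricter option since it also resolves symlinks:
```python
import os

os.path.normpath("/data/recipenlg/")   # -> '/data/recipenlg'
os.path.realpath("/data/recipenlg/")   # same, plus symlink resolution
```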
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5871/timeline | null | completed | null | null | 404.933611 | 1,766 |
https://api.github.com/repos/huggingface/datasets/issues/5869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5869/comments | https://api.github.com/repos/huggingface/datasets/issues/5869/events | https://github.com/huggingface/datasets/issues/5869 | 1,711,990,003 | I_kwDODunzps5mCuTz | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilippeMoussalli",
"id": 47530815,
"login": "PhilippeMoussalli",
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilippeMoussalli",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? Thanks in advance 🙏)",
"Hi ! The `Image()` info is stored in the **schema m... | 2023-05-16T09:42:58Z | 2023-06-16T12:48:38Z | 2023-06-16T09:30:48Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet:
```
import dask.dataframe as dd
df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions",index=False)
```
In this dataset, the "image" column is represented as a dictionary/struct with the format:
```
df = df.compute()
df["image"].iloc[0].keys()
-> dict_keys(['bytes', 'path'])
```
I think this is how the [`Image`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Image) feature from `datasets` encodes images into a format suitable for Arrow.
The next step was to push the dataset to a repository that I created:
```
dd.to_parquet(dask_df, path = "hf://datasets/philippemo/dummy_dataset/data")
```
However, after pushing the dataset using Dask, the "image" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https://huggingface.co/datasets/philippemo/dummy_dataset).
It's worth noting that both the original dataset and the one submitted with Dask have the same schema with minor alterations related to metadata:
**[ Schema of original dummy example.](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/blob/main/data/train-00000-of-00001-566cc9b19d7203f8.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
**[ Schema of pushed dataset with dask](https://huggingface.co/datasets/philippemo/dummy_dataset/blob/main/data/part.0.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
This issue seems to be related to an encoding step that occurs when pushing data to the Hub. Normally, data should be represented as an HF dataset before pushing, but we are working with an example where we need to push large datasets using Dask.
Could you please provide clarification on how to resolve this issue?
Thank you!
### Reproduction
To get the schema, I downloaded the parquet files and used `pyarrow.parquet` to read it:
```
import pyarrow.parquet
pyarrow.parquet.read_schema(<path_to_parquet>, memory_map=True)
```
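To compare the Hugging Face feature metadata specifically — a sketch, assuming `datasets` stores the `Image()` feature info under the schema's `huggingface` metadata key:
```python
from datasets import Features, Image, Value

features = Features({"image": Image(), "text": Value("string")})
# Expect a b'huggingface' key describing {"image": {"_type": "Image"}, ...}
print(features.arrow_schema.metadata)
```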
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.14.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/philippe/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: philippemo
- Configured git credential helpers: cache
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- gradio: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/philippe/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/philippe/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/philippe/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5869/timeline | null | completed | null | null | 743.797222 | 1,768 |
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zyh3826",
"id": 31238754,
"login": "zyh3826",
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zyh3826",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.",
"> \r\n\r\nGot it, thanks for your reply"
] | 2023-05-16T03:45:42Z | 2023-05-17T11:21:36Z | 2023-05-17T11:21:36Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Hi,
I have a huge file (over 500 GB) cached using `map`, and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating the cache, because `map` takes over 24 hours?
### Motivation
For large datasets, I think this is very important: we often face the problem of needing to change something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | null | null | 31.598333 | 1,769 |
https://api.github.com/repos/huggingface/datasets/issues/5866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5866/comments | https://api.github.com/repos/huggingface/datasets/issues/5866/events | https://github.com/huggingface/datasets/issues/5866 | 1,710,496,993 | I_kwDODunzps5l9Bzh | 5,866 | Issue with Sequence features | {
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alialamiidrissi",
"id": 14365168,
"login": "alialamiidrissi",
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alialamiidrissi",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 2023-05-15T17:13:29Z | 2023-05-26T11:57:17Z | 2023-05-26T11:57:17Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`Sequence` features sometimes cause errors when the specified length is not -1.
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Value(dtype='float64',id=None), length=2, id=None)})
Dataset.from_dict({"target": np.ones(2000).astype(int), "x": np.random.rand(2000,2)},features = feats).flatten_indices()
```
Throws:
```
TypeError: Couldn't cast array of type
fixed_size_list<item: double>[2]
to
Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)
```
The same code works without any issues when `length = -1`
EDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason
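Until a fix lands, a workaround sketch that only changes `length=2` to `length=-1` in the repro above (class names written as strings here) runs without the cast error, at the cost of losing the fixed-length guarantee in the schema:
```python
import numpy as np
from datasets import ClassLabel, Dataset, Features, Sequence, Value

# Same repro as above, but with a variable-length Sequence (length=-1).
feats = Features(
    target=ClassLabel(names=["0", "1"]),
    x=Sequence(feature=Value(dtype="float64"), length=-1),
)
ds = Dataset.from_dict(
    {"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)},
    features=feats,
).flatten_indices()
```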
### Expected behavior
No exception
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5866/timeline | null | completed | null | null | 258.73 | 1,771 |
https://api.github.com/repos/huggingface/datasets/issues/5858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5858/comments | https://api.github.com/repos/huggingface/datasets/issues/5858/events | https://github.com/huggingface/datasets/issues/5858 | 1,709,332,632 | I_kwDODunzps5l4liY | 5,858 | Throw an error when dataset improperly indexed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note that in `datasets` we do not have vectorized operation like `pandas`. Therefore, your equality comparisons above are `False`:\r\n- For example: `squad['question']` returns a `list`, and this list is not equal to `\"Who was the Norse leader?\"`\r\n\r\nThe `False` ... | 2023-05-15T05:15:53Z | 2023-05-25T16:23:19Z | 2023-05-25T16:23:19Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Pandas-style subset indexing on a dataset does not throw an error when perhaps it should. Instead, it returns the first instance of the dataset regardless of the index condition.
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. `squad = datasets.load_dataset("squad_v2", split="validation")`
2. `item = squad[squad['question'] == "Who was the Norse leader?"]`
or `it = squad[squad['id'] == '56ddde6b9a695914005b962b']`
3. returns the first item in the dataset, which does not satisfy the above conditions:
`{'id': '56ddde6b9a695914005b9628', 'title': 'Normans', 'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.', 'question': 'In what country is Normandy located?', 'answers': {'text': ['France', 'France', 'France', 'France'], 'answer_start': [159, 159, 159, 159]}}`
### Expected behavior
Should either throw an error message, or return the dataset item that satisfies the condition.
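A sketch of why row 0 comes back and of the supported way to select matching rows: `squad['question'] == "..."` compares a whole Python list to a string, which is simply `False`, and `False` is an `int` (0), so `squad[False]` indexes row 0. Row selection is meant to go through `filter` instead (assuming the same `squad_v2` validation split as above):
```python
import datasets

squad = datasets.load_dataset("squad_v2", split="validation")

# The comparison below yields plain False (list vs. string), and squad[False]
# is squad[0] -- hence the unrelated first row. Use filter for row selection:
matches = squad.filter(lambda ex: ex["id"] == "56ddde6b9a695914005b962b")
print(matches[0])
```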
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5858/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5858/timeline | null | completed | null | null | 251.123889 | 1,779 |
https://api.github.com/repos/huggingface/datasets/issues/5857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5857/comments | https://api.github.com/repos/huggingface/datasets/issues/5857/events | https://github.com/huggingface/datasets/issues/5857 | 1,709,326,622 | I_kwDODunzps5l4kEe | 5,857 | Adding chemistry dataset/models in huggingface | {
"avatar_url": "https://avatars.githubusercontent.com/u/16902896?v=4",
"events_url": "https://api.github.com/users/knc6/events{/privacy}",
"followers_url": "https://api.github.com/users/knc6/followers",
"following_url": "https://api.github.com/users/knc6/following{/other_user}",
"gists_url": "https://api.github.com/users/knc6/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/knc6",
"id": 16902896,
"login": "knc6",
"node_id": "MDQ6VXNlcjE2OTAyODk2",
"organizations_url": "https://api.github.com/users/knc6/orgs",
"received_events_url": "https://api.github.com/users/knc6/received_events",
"repos_url": "https://api.github.com/users/knc6/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/knc6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knc6/subscriptions",
"type": "User",
"url": "https://api.github.com/users/knc6",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nThis would be a nice addition to the Hub! You can find the existing chemistry datasets/models on the Hub (using the `chemistry` tag) [here](https://huggingface.co/search/full-text?q=chemistry&type=model&type=dataset).\r\n\r\nFeel free to ping us here on the Hub if you need help adding the datasets.\r\n... | 2023-05-15T05:09:49Z | 2023-07-21T13:45:40Z | 2023-07-21T13:45:40Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Hugging Face is a really amazing platform for open science.
In addition to computer vision, video, and NLP, would it be of interest to add chemistry/materials-science datasets/models to Hugging Face? Or, if it's already done, can you provide some pointers?
We have been working on a comprehensive benchmark on this topic, the [JARVIS-Leaderboard](https://pages.nist.gov/jarvis_leaderboard/), and I am wondering if we could contribute/integrate this project as a part of Hugging Face.
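As a rough sketch of how one benchmark table might be converted and uploaded: the column names and repo id below are placeholders, not the real JARVIS-Leaderboard schema:
```python
import pandas as pd
from datasets import Dataset

# Hypothetical benchmark table -- the columns are placeholders.
df = pd.DataFrame({"entry_id": ["mat-1", "mat-2"], "target": [0.12, 0.34]})

Dataset.from_pandas(df).push_to_hub("your-org/jarvis-benchmark-demo")  # placeholder repo id
```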
### Motivation
Similar to the mainstream AI field, there is a need for large-scale benchmarks/models/infrastructure for chemistry/materials data.
### Your contribution
We can start adding datasets as our [benchmarks](https://github.com/usnistgov/jarvis_leaderboard/tree/main/jarvis_leaderboard/benchmarks) should be easily convertible to the dataset format. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5857/timeline | null | completed | null | null | 1,616.5975 | 1,780 |