Data
timm.data.create_dataset
( name, root, split = 'validation', search_split = True, class_map = None, load_bytes = False, is_training = False, download = False, batch_size = None, seed = 42, repeats = 0, **kwargs )
Dataset factory method
In parentheses after each arg are the dataset types that support it, one of the following (a usage sketch follows the list):
- folder - default, timm folder (or tar) based ImageDataset
- torch - torchvision based datasets
- HFDS - Model Database Datasets
- TFDS - TensorFlow Datasets wrapper in an IterableDataset interface via IterableImageDataset
- WDS - Webdataset
- all - any of the above
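A minimal usage sketch for the default folder backend; the dataset root path below is a hypothetical placeholder.

```python
from timm.data import create_dataset

# An empty name selects the default folder/tar ImageDataset backend; prefixes
# such as 'torch/', 'hfds/', 'tfds/' or 'wds/' select the other backends above.
dataset = create_dataset(
    name='',
    root='/path/to/imagenet',   # hypothetical folder with class subdirectories
    split='validation',
    is_training=False,
)

print(len(dataset))
img, label = dataset[0]         # PIL image and integer class index (no transform applied yet)
```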
timm.data.create_loader
( dataset, input_size, batch_size, is_training = False, use_prefetcher = True, no_aug = False, re_prob = 0.0, re_mode = 'const', re_count = 1, re_split = False, scale = None, ratio = None, hflip = 0.5, vflip = 0.0, color_jitter = 0.4, auto_augment = None, num_aug_repeats = 0, num_aug_splits = 0, interpolation = 'bilinear', mean = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225), num_workers = 1, distributed = False, crop_pct = None, crop_mode = None, collate_fn = None, pin_memory = False, fp16 = False, img_dtype = torch.float32, device = torch.device('cuda'), tf_preprocessing = False, use_multi_epochs_loader = False, persistent_workers = True, worker_seeding = 'all' )
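A minimal sketch, assuming the folder dataset from above and a single-GPU setup; the batch size, worker count, and device choice are illustrative, not prescribed defaults.

```python
import torch
from timm.data import create_dataset, create_loader

dataset = create_dataset('', root='/path/to/imagenet', split='train', is_training=True)

use_cuda = torch.cuda.is_available()
loader = create_loader(
    dataset,
    input_size=(3, 224, 224),
    batch_size=64,
    is_training=True,                 # enables the training augmentation pipeline
    use_prefetcher=use_cuda,          # the CUDA prefetcher normalizes batches on-device
    interpolation='bicubic',
    num_workers=4,
    device=torch.device('cuda' if use_cuda else 'cpu'),
)

for images, targets in loader:
    # images: float tensor of shape (64, 3, 224, 224); targets: int64 tensor of shape (64,)
    break
```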
timm.data.create_transform
( input_size, is_training = False, use_prefetcher = False, no_aug = False, scale = None, ratio = None, hflip = 0.5, vflip = 0.0, color_jitter = 0.4, auto_augment = None, interpolation = 'bilinear', mean = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225), re_prob = 0.0, re_mode = 'const', re_count = 1, re_num_splits = 0, crop_pct = None, crop_mode = None, tf_preprocessing = False, separate = False )
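A minimal sketch showing separate eval and train transforms; the RandAugment policy string and random-erasing probability are illustrative choices, and the input image is a placeholder.

```python
from PIL import Image
from timm.data import create_transform

eval_tf = create_transform(input_size=224, is_training=False, interpolation='bicubic', crop_pct=0.95)
train_tf = create_transform(
    input_size=224,
    is_training=True,
    auto_augment='rand-m9-mstd0.5',   # RandAugment policy string
    re_prob=0.25,                     # random erasing probability
    interpolation='bicubic',
)

img = Image.new('RGB', (320, 320))    # placeholder image
x = eval_tf(img)                      # float tensor of shape (3, 224, 224), normalized
```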
timm.data.resolve_data_config
( args = None, pretrained_cfg = None, model = None, use_test_size = False, verbose = False )
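A minimal sketch: resolve the preprocessing config from a model's pretrained config and feed it straight into create_transform. The model name is just an example.

```python
import timm
from timm.data import resolve_data_config, create_transform

model = timm.create_model('resnet50', pretrained=True)   # example model, downloads weights

config = resolve_data_config(model=model)
# a dict with keys such as 'input_size', 'interpolation', 'mean', 'std', 'crop_pct'

transform = create_transform(**config)   # eval transform matching the pretrained weights
```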