Listing all functions in torch

import torch

# dir() returns every attribute name the torch module exposes;
# print them one per line
for name in dir(torch):
    print(name)


Output:

AVG
AggregationType
AnyType
Argument
ArgumentSpec
BFloat16Storage
BFloat16Tensor
BenchmarkConfig
BenchmarkExecutionStats
Block
BoolStorage
BoolTensor
BoolType
BufferDict
ByteStorage
ByteTensor
CONV_BN_FUSION
CallStack
Capsule
CharStorage
CharTensor
ClassType
Code
CompilationUnit
CompleteArgumentSpec
ComplexDoubleStorage
ComplexFloatStorage
ComplexType
ConcreteModuleType
ConcreteModuleTypeBuilder
CudaBFloat16StorageBase
CudaBoolStorageBase
CudaByteStorageBase
CudaCharStorageBase
CudaComplexDoubleStorageBase
CudaComplexFloatStorageBase
CudaDoubleStorageBase
CudaFloatStorageBase
CudaHalfStorageBase
CudaIntStorageBase
CudaLongStorageBase
CudaShortStorageBase
DeepCopyMemoTable
DeviceObjType
DictType
DisableTorchFunction
DoubleStorage
DoubleTensor
EnumType
ErrorReport
ExecutionPlan
FUSE_ADD_RELU
FatalError
FileCheck
FloatStorage
FloatTensor
FloatType
FunctionSchema
Future
FutureType
Generator
Gradient
Graph
GraphExecutorState
HOIST_CONV_PACKED_PARAMS
HalfStorage
HalfStorageBase
HalfTensor
INSERT_FOLD_PREPACK_OPS
IODescriptor
InferredType
IntStorage
IntTensor
IntType
InterfaceType
JITException
ListType
LiteScriptModule
LockingLogger
LoggerBase
LongStorage
LongTensor
MobileOptimizerType
ModuleDict
Node
NoneType
NoopLogger
NumberType
OptionalType
ParameterDict
PyObjectType
PyTorchFileReader
PyTorchFileWriter
QInt32Storage
QInt32StorageBase
QInt8Storage
QInt8StorageBase
QUInt4x2Storage
QUInt8Storage
REMOVE_DROPOUT
RRefType
SUM
ScriptClass
ScriptFunction
ScriptMethod
ScriptModule
ScriptObject
Set
ShortStorage
ShortTensor
Size
StaticRuntime
Storage
Stream
StreamObjType
StringType
TYPE_CHECKING
Tensor
TensorType
ThroughputBenchmark
TracingState
TupleType
Type
USE_GLOBAL_DEPS
USE_RTLD_GLOBAL_WITH_LIBTORCH
Use
Value
_C
_StorageBase
_VF
__all__
__annotations__
__builtins__
__cached__
__config__
__doc__
__file__
__future__
__loader__
__name__
__package__
__path__
__spec__
__version__
_adaptive_avg_pool2d
_add_batch_dim
_add_relu
_add_relu_
_addmv_impl_
_aminmax
_amp_foreach_non_finite_check_and_unscale_
_amp_update_scale
_assert
_autograd_functions
_baddbmm_mkl_
_batch_norm_impl_index
_bmm
_cast_Byte
_cast_Char
_cast_Double
_cast_Float
_cast_Half
_cast_Int
_cast_Long
_cast_Short
_cat
_choose_qparams_per_tensor
_classes
_compute_linear_combination
_conj
_convolution
_convolution_nogroup
_copy_from
_ctc_loss
_cudnn_ctc_loss
_cudnn_init_dropout_state
_cudnn_rnn
_cudnn_rnn_flatten_weight
_cufft_clear_plan_cache
_cufft_get_plan_cache_max_size
_cufft_get_plan_cache_size
_cufft_set_plan_cache_max_size
_cummax_helper
_cummin_helper
_debug_has_internal_overlap
_dim_arange
_dirichlet_grad
_embedding_bag
_embedding_bag_forward_only
_empty_affine_quantized
_empty_per_channel_affine_quantized
_euclidean_dist
_fake_quantize_learnable_per_channel_affine
_fake_quantize_learnable_per_tensor_affine
_fft_c2c
_fft_c2r
_fft_r2c
_foreach_abs
_foreach_abs_
_foreach_acos
_foreach_acos_
_foreach_add
_foreach_add_
_foreach_addcdiv
_foreach_addcdiv_
_foreach_addcmul
_foreach_addcmul_
_foreach_asin
_foreach_asin_
_foreach_atan
_foreach_atan_
_foreach_ceil
_foreach_ceil_
_foreach_cos
_foreach_cos_
_foreach_cosh
_foreach_cosh_
_foreach_div
_foreach_div_
_foreach_erf
_foreach_erf_
_foreach_erfc
_foreach_erfc_
_foreach_exp
_foreach_exp_
_foreach_expm1
_foreach_expm1_
_foreach_floor
_foreach_floor_
_foreach_frac
_foreach_frac_
_foreach_lgamma
_foreach_lgamma_
_foreach_log
_foreach_log10
_foreach_log10_
_foreach_log1p
_foreach_log1p_
_foreach_log2
_foreach_log2_
_foreach_log_
_foreach_maximum
_foreach_minimum
_foreach_mul
_foreach_mul_
_foreach_neg
_foreach_neg_
_foreach_reciprocal
_foreach_reciprocal_
_foreach_round
_foreach_round_
_foreach_sigmoid
_foreach_sigmoid_
_foreach_sin
_foreach_sin_
_foreach_sinh
_foreach_sinh_
_foreach_sqrt
_foreach_sqrt_
_foreach_sub
_foreach_sub_
_foreach_tan
_foreach_tan_
_foreach_tanh
_foreach_tanh_
_foreach_trunc
_foreach_trunc_
_foreach_zero_
_fused_dropout
_grid_sampler_2d_cpu_fallback
_has_compatible_shallow_copy_type
_import_dotted_name
_index_copy_
_index_put_impl_
_initExtension
_jit_internal
_linalg_inv_out_helper_
_linalg_qr_helper
_linalg_solve_out_helper_
_linalg_utils
_load_global_deps
_lobpcg
_log_softmax
_log_softmax_backward_data
_logcumsumexp
_lowrank
_lu_solve_helper
_lu_with_info
_make_dual
_make_per_channel_quantized_tensor
_make_per_tensor_quantized_tensor
_masked_scale
_mkldnn
_mkldnn_reshape
_mkldnn_transpose
_mkldnn_transpose_
_mode
_namedtensor_internals
_nnpack_available
_nnpack_spatial_convolution
_ops
_pack_padded_sequence
_pad_packed_sequence
_remove_batch_dim
_reshape_from_tensor
_rowwise_prune
_s_where
_sample_dirichlet
_saturate_weight_to_fp16
_shape_as_tensor
_six
_sobol_engine_draw
_sobol_engine_ff_
_sobol_engine_initialize_state_
_sobol_engine_scramble_
_softmax
_softmax_backward_data
_sparse_addmm
_sparse_coo_tensor_unsafe
_sparse_log_softmax
_sparse_log_softmax_backward_data
_sparse_matrix_mask_helper
_sparse_mm
_sparse_softmax
_sparse_softmax_backward_data
_sparse_sparse_matmul
_sparse_sum
_stack
_standard_gamma
_standard_gamma_grad
_std
_storage_classes
_string_classes
_syevd_helper
_tensor_classes
_tensor_str
_test_serialization_subcmul
_trilinear
_unique
_unique2
_unpack_dual
_use_cudnn_ctc_loss
_use_cudnn_rnn_flatten_weight
_utils
_utils_internal
_validate_sparse_coo_tensor_args
_var
_vmap_internals
_weight_norm
_weight_norm_cuda_interface
abs
abs_
absolute
acos
acos_
acosh
acosh_
adaptive_avg_pool1d
adaptive_max_pool1d
add
addbmm
addcdiv
addcmul
addmm
addmv
addmv_
addr
affine_grid_generator
align_tensors
all
allclose
alpha_dropout
alpha_dropout_
amax
amin
angle
any
arange
arccos
arccos_
arccosh
arccosh_
arcsin
arcsin_
arcsinh
arcsinh_
arctan
arctan_
arctanh
arctanh_
are_deterministic_algorithms_enabled
argmax
argmin
argsort
as_strided
as_strided_
as_tensor
asin
asin_
asinh
asinh_
atan
atan2
atan_
atanh
atanh_
atleast_1d
atleast_2d
atleast_3d
autocast_decrement_nesting
autocast_increment_nesting
autograd
avg_pool1d
backends
baddbmm
bartlett_window
base_py_dll_path
batch_norm
batch_norm_backward_elemt
batch_norm_backward_reduce
batch_norm_elemt
batch_norm_gather_stats
batch_norm_gather_stats_with_counts
batch_norm_stats
batch_norm_update_stats
bernoulli
bfloat16
bilinear
binary_cross_entropy_with_logits
bincount
binomial
bitwise_and
bitwise_not
bitwise_or
bitwise_xor
blackman_window
block_diag
bmm
bool
broadcast_shapes
broadcast_tensors
broadcast_to
bucketize
can_cast
cartesian_prod
cat
cdist
cdouble
ceil
ceil_
celu
celu_
cfloat
chain_matmul
channel_shuffle
channels_last
channels_last_3d
cholesky
cholesky_inverse
cholesky_solve
choose_qparams_optimized
chunk
clamp
clamp_
clamp_max
clamp_max_
clamp_min
clamp_min_
classes
clear_autocast_cache
clip
clip_
clone
column_stack
combinations
compiled_with_cxx11_abi
complex
complex128
complex32
complex64
conj
constant_pad_nd
contiguous_format
conv1d
conv2d
conv3d
conv_tbc
conv_transpose1d
conv_transpose2d
conv_transpose3d
convolution
copysign
cos
cos_
cosh
cosh_
cosine_embedding_loss
cosine_similarity
count_nonzero
cpp
cross
ctc_loss
ctypes
cuda
cuda_path
cuda_version
cudnn_affine_grid_generator
cudnn_batch_norm
cudnn_convolution
cudnn_convolution_transpose
cudnn_grid_sampler
cudnn_is_acceptable
cummax
cummin
cumprod
cumsum
default_generator
deg2rad
deg2rad_
dequantize
det
detach
detach_
device
diag
diag_embed
diagflat
diagonal
diff
digamma
dist
distributed
distributions
div
divide
dll
dll_path
dll_paths
dlls
dot
double
dropout
dropout_
dsmm
dstack
dtype
eig
einsum
embedding
embedding_bag
embedding_renorm_
empty
empty_like
empty_meta
empty_quantized
empty_strided
enable_grad
eq
equal
erf
erf_
erfc
erfc_
erfinv
exp
exp2
exp2_
exp_
expm1
expm1_
eye
fake_quantize_per_channel_affine
fake_quantize_per_tensor_affine
fbgemm_linear_fp16_weight
fbgemm_linear_fp16_weight_fp32_activation
fbgemm_linear_int8_weight
fbgemm_linear_int8_weight_fp32_activation
fbgemm_linear_quantize_weight
fbgemm_pack_gemm_matrix_fp16
fbgemm_pack_quantized_matrix
feature_alpha_dropout
feature_alpha_dropout_
feature_dropout
feature_dropout_
fft
fill_
finfo
fix
fix_
flatten
flip
fliplr
flipud
float
float16
float32
float64
float_power
floor
floor_
floor_divide
fmax
fmin
fmod
fork
frac
frac_
frobenius_norm
from_file
from_numpy
full
full_like
functional
futures
gather
gcd
gcd_
ge
geqrf
ger
get_default_dtype
get_device
get_file_path
get_num_interop_threads
get_num_threads
get_rng_state
glob
greater
greater_equal
grid_sampler
grid_sampler_2d
grid_sampler_3d
group_norm
gru
gru_cell
gt
half
hamming_window
hann_window
hardshrink
has_cuda
has_cudnn
has_lapack
has_mkl
has_mkldnn
has_openmp
heaviside
hinge_embedding_loss
histc
hsmm
hspmm
hstack
hub
hypot
i0
i0_
igamma
igammac
iinfo
imag
import_ir_module
import_ir_module_from_buffer
index_add
index_copy
index_fill
index_put
index_put_
index_select
init_num_threads
initial_seed
inner
instance_norm
int
int16
int32
int64
int8
int_repr
inverse
is_anomaly_enabled
is_autocast_enabled
is_complex
is_deterministic
is_distributed
is_floating_point
is_grad_enabled
is_loaded
is_nonzero
is_same_size
is_signed
is_storage
is_tensor
is_vulkan_available
isclose
isfinite
isinf
isnan
isneginf
isposinf
isreal
istft
jit
kaiser_window
kernel32
kl_div
kron
kthvalue
last_error
layer_norm
layout
lcm
lcm_
ldexp
ldexp_
le
legacy_contiguous_format
lerp
less
less_equal
lgamma
linalg
linspace
load
lobpcg
log
log10
log10_
log1p
log1p_
log2
log2_
log_
log_softmax
logaddexp
logaddexp2
logcumsumexp
logdet
logical_and
logical_not
logical_or
logical_xor
logit
logit_
logspace
logsumexp
long
lstm
lstm_cell
lstsq
lt
lu
lu_solve
lu_unpack
manual_seed
margin_ranking_loss
masked_fill
masked_scatter
masked_select
matmul
matrix_exp
matrix_power
matrix_rank
max
max_pool1d
max_pool1d_with_indices
max_pool2d
max_pool3d
maximum
mean
median
memory_format
merge_type_from_type_comment
meshgrid
min
minimum
miopen_batch_norm
miopen_convolution
miopen_convolution_transpose
miopen_depthwise_convolution
miopen_rnn
mkldnn_adaptive_avg_pool2d
mkldnn_convolution
mkldnn_convolution_backward_weights
mkldnn_linear_backward_weights
mkldnn_max_pool2d
mkldnn_max_pool3d
mm
mode
moveaxis
movedim
msort
mul
multinomial
multiply
multiprocessing
mv
mvlgamma
name
nan_to_num
nan_to_num_
nanmedian
nanquantile
nansum
narrow
narrow_copy
native_batch_norm
native_group_norm
native_layer_norm
native_norm
ne
neg
neg_
negative
negative_
nextafter
nn
no_grad
nonzero
norm
norm_except_dim
normal
not_equal
nuclear_norm
numel
nvtoolsext_dll_path
ones
ones_like
onnx
ops
optim
orgqr
ormqr
os
outer
overrides
pairwise_distance
parse_ir
parse_schema
parse_type_comment
path_patched
pca_lowrank
pdist
per_channel_affine
per_channel_affine_float_qparams
per_channel_symmetric
per_tensor_affine
per_tensor_symmetric
pfiles_path
pinverse
pixel_shuffle
pixel_unshuffle
platform
poisson
poisson_nll_loss
polar
polygamma
pow
prelu
prepare_multiprocessing_environment
preserve_format
prev_error_mode
prod
profiler
promote_types
py_dll_path
q_per_channel_axis
q_per_channel_scales
q_per_channel_zero_points
q_scale
q_zero_point
qint32
qint8
qr
qscheme
quantile
quantization
quantize_per_channel
quantize_per_tensor
quantized_batch_norm
quantized_gru
quantized_gru_cell
quantized_lstm
quantized_lstm_cell
quantized_max_pool1d
quantized_max_pool2d
quantized_rnn_relu_cell
quantized_rnn_tanh_cell
quasirandom
quint4x2
quint8
rad2deg
rad2deg_
rand
rand_like
randint
randint_like
randn
randn_like
random
randperm
range
ravel
real
reciprocal
reciprocal_
relu
relu_
remainder
renorm
repeat_interleave
res
reshape
resize_as_
result_type
rnn_relu
rnn_relu_cell
rnn_tanh
rnn_tanh_cell
roll
rot90
round
round_
row_stack
rrelu
rrelu_
rsqrt
rsqrt_
rsub
saddmm
save
scalar_tensor
scatter
scatter_add
searchsorted
seed
select
selu
selu_
serialization
set_anomaly_enabled
set_autocast_enabled
set_default_dtype
set_default_tensor_type
set_deterministic
set_flush_denormal
set_grad_enabled
set_num_interop_threads
set_num_threads
set_printoptions
set_rng_state
sgn
short
sigmoid
sigmoid_
sign
signbit
sin
sin_
sinc
sinc_
sinh
sinh_
slogdet
smm
softmax
solve
sort
sparse
sparse_coo
sparse_coo_tensor
split
split_with_sizes
spmm
sqrt
sqrt_
square
square_
squeeze
sspaddmm
stack
std
std_mean
stft
storage
strided
sub
subtract
sum
svd
svd_lowrank
swapaxes
swapdims
symeig
sys
t
take
tan
tan_
tanh
tanh_
tensor
tensor_split
tensordot
testing
textwrap
th_dll_path
threshold
threshold_
tile
topk
torch
trace
transpose
trapz
triangular_solve
tril
tril_indices
triplet_margin_loss
triu
triu_indices
true_divide
trunc
trunc_
typename
types
uint8
unbind
unify_type_list
unique
unique_consecutive
unsafe_chunk
unsafe_split
unsafe_split_with_sizes
unsqueeze
use_deterministic_algorithms
utils
vander
var
var_mean
vdot
version
view_as_complex
view_as_real
vstack
wait
warnings
where
with_load_library_flags
xlogy
xlogy_
zero_
zeros
zeros_like
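
The raw dir() listing mixes public functions with private helpers (leading underscore), dunder attributes, submodules, dtypes, and other non-callable objects. A minimal sketch that narrows it down to public callables (the startswith/callable filter is one reasonable heuristic, not an official API):

import torch

# Keep only public, callable attributes; this skips _private helpers,
# __dunder__ attributes, dtypes, submodules, and other non-callables
public_funcs = [name for name in dir(torch)
                if not name.startswith("_") and callable(getattr(torch, name))]

print(len(public_funcs))   # how many public callables this build exposes
print(public_funcs[:10])   # first few names, in alphabetical order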

To look up how a specific PyTorch function is used, call the built-in help() function, as shown below:


import torch

# Print the documentation for torch.add
help(torch.add)

This prints the help documentation for torch.add, including its parameters, return value, and usage examples. In a Jupyter Notebook or IPython session you can also append a ? to view the same documentation, as shown below:

import torch

# Show the documentation for torch.add in IPython / Jupyter
torch.add?

This displays the same help documentation as help(torch.add) above.
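
The same text is also stored on the function's __doc__ attribute, so a minimal sketch that prints it directly:

import torch

# The text that help() renders lives on the function's __doc__ attribute
print(torch.add.__doc__)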


The resulting help text (truncated here):
Docstring:
add(input, other, *, out=None)
Adds the scalar :attr:`other` to each element of the input :attr:`input`
and returns a new resulting tensor.
.. math::
    \text{out} = \text{input} + \text{other}
If :attr:`input` is of type FloatTensor or DoubleTensor, :attr:`other` must be
a real number, otherwise it should be an integer.
Args:
    input (Tensor): the input tensor.
    value (Number): the number to be added to each element of :attr:`input`
Keyword arguments:
    out (Tensor, optional): the output tensor.
Example::
    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.0202,  1.0985,  1.3506, -0.6056])
    >>> torch.add(a, 20)
...
            [-18.6971, -18.0736, -17.0994, -17.3216],
            [ -6.7845,  -6.1610,  -5.1868,  -5.4090],
            [ -8.9902,  -8.3667,  -7.3925,  -7.6147]])
Type:      builtin_function_or_method
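
To confirm what the docstring describes, here is a minimal sketch exercising torch.add with both a scalar and a tensor as the second argument:

import torch

a = torch.randn(4)
print(torch.add(a, 20))   # scalar: adds 20 to every element of a
b = torch.randn(4)
print(torch.add(a, b))    # tensor: elementwise addition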