Pandas 2.2 Official Tutorials and Guides (Part 17)

Duplicate Labels

Original: pandas.pydata.org/docs/user_guide/duplicates.html

Index objects are not required to be unique; you can have duplicate row or column labels. This may be a bit confusing at first. If you're familiar with SQL, you know that row labels are similar to a primary key on a table, and you would never want duplicates in a SQL table. But one of pandas' roles is cleaning messy, real-world data before it goes to some downstream system. And real-world data has duplicates, even in fields that are supposed to be unique.

This section describes how duplicate labels change the behavior of certain operations, and how to prevent duplicates from arising during operations, or to detect them if they do.

In [1]: import pandas as pd
In [2]: import numpy as np 

Consequences of Duplicate Labels

Some pandas methods (Series.reindex() for example) just don't work with duplicates present. The output can't be determined, and so pandas raises.

In [3]: s1 = pd.Series([0, 1, 2], index=["a", "b", "b"])
In [4]: s1.reindex(["a", "b", "c"])
---------------------------------------------------------------------------
ValueError  Traceback (most recent call last)
Cell In[4], line 1
----> 1 s1.reindex(["a", "b", "c"])
File ~/work/pandas/pandas/pandas/core/series.py:5153, in Series.reindex(self, index, axis, method, copy, level, fill_value, limit, tolerance)
  5136 @doc(
  5137     NDFrame.reindex,  # type: ignore[has-type]
  5138     klass=_shared_doc_kwargs["klass"],
   (...)
  5151     tolerance=None,
  5152 ) -> Series:
-> 5153     return super().reindex(
  5154         index=index,
  5155         method=method,
  5156         copy=copy,
  5157         level=level,
  5158         fill_value=fill_value,
  5159         limit=limit,
  5160         tolerance=tolerance,
  5161     )
File ~/work/pandas/pandas/pandas/core/generic.py:5610, in NDFrame.reindex(self, labels, index, columns, axis, method, copy, level, fill_value, limit, tolerance)
  5607     return self._reindex_multi(axes, copy, fill_value)
  5609 # perform the reindex on the axes
-> 5610 return self._reindex_axes(
  5611     axes, level, limit, tolerance, method, fill_value, copy
  5612 ).__finalize__(self, method="reindex")
File ~/work/pandas/pandas/pandas/core/generic.py:5633, in NDFrame._reindex_axes(self, axes, level, limit, tolerance, method, fill_value, copy)
  5630     continue
  5632 ax = self._get_axis(a)
-> 5633 new_index, indexer = ax.reindex(
  5634     labels, level=level, limit=limit, tolerance=tolerance, method=method
  5635 )
  5637 axis = self._get_axis_number(a)
  5638 obj = obj._reindex_with_indexers(
  5639     {axis: [new_index, indexer]},
  5640     fill_value=fill_value,
  5641     copy=copy,
  5642     allow_dups=False,
  5643 )
File ~/work/pandas/pandas/pandas/core/indexes/base.py:4429, in Index.reindex(self, target, method, level, limit, tolerance)
  4426     raise ValueError("cannot handle a non-unique multi-index!")
  4427 elif not self.is_unique:
  4428     # GH#42568
-> 4429     raise ValueError("cannot reindex on an axis with duplicate labels")
  4430 else:
  4431     indexer, _ = self.get_indexer_non_unique(target)
ValueError: cannot reindex on an axis with duplicate labels 

Other methods, like indexing, can give very surprising results. Typically indexing with a scalar will reduce dimensionality. Slicing a DataFrame with a scalar will return a Series. Slicing a Series with a scalar will return a scalar. But with duplicates, this isn't the case.

In [5]: df1 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "A", "B"])
In [6]: df1
Out[6]: 
 A  A  B
0  0  1  2
1  3  4  5 

We have duplicates in our columns. If we slice 'B', we get back a Series:

In [7]: df1["B"]  # a series
Out[7]: 
0    2
1    5
Name: B, dtype: int64 

But slicing 'A' returns a DataFrame:

In [8]: df1["A"]  # a DataFrame
Out[8]: 
 A  A
0  0  1
1  3  4 

This applies to row labels as well:

In [9]: df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])
In [10]: df2
Out[10]: 
 A
a  0
a  1
b  2
In [11]: df2.loc["b", "A"]  # a scalar
Out[11]: 2
In [12]: df2.loc["a", "A"]  # a Series
Out[12]: 
a    0
a    1
Name: A, dtype: int64 

Duplicate Label Detection

You can check whether an Index (storing the row or column labels) is unique with Index.is_unique:

In [13]: df2
Out[13]: 
 A
a  0
a  1
b  2
In [14]: df2.index.is_unique
Out[14]: False
In [15]: df2.columns.is_unique
Out[15]: True 

Note

Checking whether an index is unique is somewhat expensive for large datasets. pandas caches this result, so re-checking on the same index is very fast.

Index.duplicated() will return a boolean ndarray indicating whether a label is repeated.

In [16]: df2.index.duplicated()
Out[16]: array([False,  True, False]) 

Which can be used as a boolean filter to drop duplicate rows:

In [17]: df2.loc[~df2.index.duplicated(), :]
Out[17]: 
 A
a  0
b  2 
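Index.duplicated() also accepts a keep parameter (mirroring Series.duplicated()) that controls which occurrence is marked as the duplicate; a minimal sketch:

```python
import pandas as pd

idx = pd.Index(["a", "a", "b"])

# By default the first occurrence is kept (marked False).
print(idx.duplicated())              # [False  True False]
# Mark everything except the last occurrence instead.
print(idx.duplicated(keep="last"))   # [ True False False]
# Mark every occurrence of a duplicated label.
print(idx.duplicated(keep=False))    # [ True  True False]
```

Combined with the `~` filter above, keep="last" retains the last row per label rather than the first.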

If you need additional logic to handle duplicate labels, rather than just dropping the repeats, using groupby() on the index is a common trick. For example, we'll resolve duplicates by averaging all rows with the same label.

In [18]: df2.groupby(level=0).mean()
Out[18]: 
 A
a  0.5
b  2.0 
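Any reduction works in place of the mean; for instance, assuming the same df2, you could keep the first value per label or sum the duplicates:

```python
import pandas as pd

df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])

first = df2.groupby(level=0).first()  # keep the first row per label
total = df2.groupby(level=0).sum()    # or aggregate duplicates by summing

print(first)  # a -> 0, b -> 2
print(total)  # a -> 1, b -> 2
```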

Disallowing Duplicate Labels

New in version 1.2.0.

As noted above, handling duplicates is an important feature when reading in raw data. That said, you may want to avoid introducing duplicates as part of a data processing pipeline (from methods like pandas.concat(), rename(), etc.). Both Series and DataFrame disallow duplicate labels by calling .set_flags(allows_duplicate_labels=False) (duplicates are allowed by default). If there are duplicate labels, an exception will be raised.

In [19]: pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
---------------------------------------------------------------------------
DuplicateLabelError  Traceback (most recent call last)
Cell In[19], line 1
----> 1 pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
File ~/work/pandas/pandas/pandas/core/generic.py:508, in NDFrame.set_flags(self, copy, allows_duplicate_labels)
  506 df = self.copy(deep=copy and not using_copy_on_write())
  507 if allows_duplicate_labels is not None:
--> 508     df.flags["allows_duplicate_labels"] = allows_duplicate_labels
  509 return df
File ~/work/pandas/pandas/pandas/core/flags.py:109, in Flags.__setitem__(self, key, value)
  107 if key not in self._keys:
  108     raise ValueError(f"Unknown flag {key}. Must be one of {self._keys}")
--> 109 setattr(self, key, value)
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
  94 if not value:
  95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
  98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
  712 duplicates = self._format_duplicate_message()
  713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
      positions
label          
b        [1, 2] 

This applies to both row and column labels for a DataFrame:

In [20]: pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "B", "C"],).set_flags(
 ....:    allows_duplicate_labels=False
 ....: )
 ....: 
Out[20]: 
 A  B  C
0  0  1  2
1  3  4  5 

This attribute can be checked or set with allows_duplicate_labels, which indicates whether that object can have duplicate labels.

In [21]: df = pd.DataFrame({"A": [0, 1, 2, 3]}, index=["x", "y", "X", "Y"]).set_flags(
 ....:    allows_duplicate_labels=False
 ....: )
 ....: 
In [22]: df
Out[22]: 
 A
x  0
y  1
X  2
Y  3
In [23]: df.flags.allows_duplicate_labels
Out[23]: False 

DataFrame.set_flags() can be used to return a new DataFrame with attributes like allows_duplicate_labels set to some value.

In [24]: df2 = df.set_flags(allows_duplicate_labels=True)
In [25]: df2.flags.allows_duplicate_labels
Out[25]: True 

The new DataFrame returned is a view on the same data as the old DataFrame. Or the property can just be set directly on the same object.

In [26]: df2.flags.allows_duplicate_labels = False
In [27]: df2.flags.allows_duplicate_labels
Out[27]: False 

When processing raw, messy data you might initially read in the messy data (which potentially has duplicate labels), deduplicate, and then disallow duplicates going forward, to ensure that your data processing pipeline doesn't introduce duplicates.

>>> raw = pd.read_csv("...")
>>> deduplicated = raw.groupby(level=0).first()  # remove duplicates
>>> deduplicated.flags.allows_duplicate_labels = False  # disallow going forward 

Setting allows_duplicate_labels=False on a Series or DataFrame with duplicate labels, or performing an operation that introduces duplicate labels on a Series or DataFrame that disallows duplicates, will raise an errors.DuplicateLabelError.

In [28]: df.rename(str.upper)
---------------------------------------------------------------------------
DuplicateLabelError  Traceback (most recent call last)
Cell In[28], line 1
----> 1 df.rename(str.upper)
File ~/work/pandas/pandas/pandas/core/frame.py:5767, in DataFrame.rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
  5636 def rename(
  5637     self,
  5638     mapper: Renamer | None = None,
   (...)
  5646     errors: IgnoreRaise = "ignore",
  5647 ) -> DataFrame | None:
  5648  """
  5649 Rename columns or index labels.
  5650  
 (...)
  5765 4  3  6
  5766 """
-> 5767     return super()._rename(
  5768         mapper=mapper,
  5769         index=index,
  5770         columns=columns,
  5771         axis=axis,
  5772         copy=copy,
  5773         inplace=inplace,
  5774         level=level,
  5775         errors=errors,
  5776     )
File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
  1138     return None
  1139 else:
-> 1140     return result.__finalize__(self, method="rename")
File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
  6255 if other.attrs:
  6256     # We want attrs propagation to have minimal performance
  6257     # impact if attrs are not used; i.e. attrs is an empty dict.
  6258     # One could make the deepcopy unconditionally, but a deepcopy
  6259     # of an empty dict is 50x more expensive than the empty check.
  6260     self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
  6263 # For subclasses using _metadata.
  6264 for name in set(self._metadata) & set(other._metadata):
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
  94 if not value:
  95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
  98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
  712 duplicates = self._format_duplicate_message()
  713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
      positions
label          
X        [0, 2]
Y        [1, 3] 

This error message contains the labels that are duplicated, and the numeric positions of all the duplicates (including the "original") in the Series or DataFrame.

Duplicate Label Propagation

In general, disallowing duplicates is "sticky". It's preserved through operations.

In [29]: s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)
In [30]: s1
Out[30]: 
a    0
b    0
dtype: int64
In [31]: s1.head().rename({"a": "b"})
---------------------------------------------------------------------------
DuplicateLabelError  Traceback (most recent call last)
Cell In[31], line 1
----> 1 s1.head().rename({"a": "b"})
File ~/work/pandas/pandas/pandas/core/series.py:5090, in Series.rename(self, index, axis, copy, inplace, level, errors)
  5083     axis = self._get_axis_number(axis)
  5085 if callable(index) or is_dict_like(index):
  5086     # error: Argument 1 to "_rename" of "NDFrame" has incompatible
  5087     # type "Union[Union[Mapping[Any, Hashable], Callable[[Any],
  5088     # Hashable]], Hashable, None]"; expected "Union[Mapping[Any,
  5089     # Hashable], Callable[[Any], Hashable], None]"
-> 5090     return super()._rename(
  5091         index,  # type: ignore[arg-type]
  5092         copy=copy,
  5093         inplace=inplace,
  5094         level=level,
  5095         errors=errors,
  5096     )
  5097 else:
  5098     return self._set_name(index, inplace=inplace, deep=copy)
File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
  1138     return None
  1139 else:
-> 1140     return result.__finalize__(self, method="rename")
File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
  6255 if other.attrs:
  6256     # We want attrs propagation to have minimal performance
  6257     # impact if attrs are not used; i.e. attrs is an empty dict.
  6258     # One could make the deepcopy unconditionally, but a deepcopy
  6259     # of an empty dict is 50x more expensive than the empty check.
  6260     self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
  6263 # For subclasses using _metadata.
  6264 for name in set(self._metadata) & set(other._metadata):
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
  94 if not value:
  95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
  98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
  712 duplicates = self._format_duplicate_message()
  713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
      positions
label          
b        [0, 1] 
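The traceback above shows the flag travelling with the result: head() carries allows_duplicate_labels over to the new Series, and it is the subsequent rename() that trips the check. A minimal self-contained confirmation (re-creating the s1 above):

```python
import pandas as pd

s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)

# head() produces a new Series; __finalize__ copies the flag onto it,
# so the result still disallows duplicate labels.
out = s1.head(1)
print(out.flags.allows_duplicate_labels)  # False
```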

Warning

This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.


Categorical Data

Original: pandas.pydata.org/docs/user_guide/categorical.html

This is an introduction to the pandas categorical data type, including a short comparison with R's factor.

Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time, or rating via Likert scales.

In contrast to statistical categorical variables, categorical data might have an order (e.g. "strongly agree" vs. "agree" or "first observation" vs. "second observation"), but numerical operations (additions, divisions, etc.) are not possible.

All values of categorical data are either in categories or np.nan. Order is defined by the order of the categories, not the lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes, which point to the actual values in the categories array.

The categorical data type is useful in the following cases:

  • A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here.
  • The lexical order of a variable is not the same as the logical order ("one", "two", "three"). By converting to a categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of the lexical order, see here.
  • As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).
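The memory saving from the first bullet is easy to observe; a rough sketch (the exact byte counts vary by pandas version and platform, so only the relative sizes matter):

```python
import pandas as pd

# A long string column with only a few distinct values.
s_obj = pd.Series(["low", "medium", "high"] * 10_000)
s_cat = s_obj.astype("category")

# The categorical stores each string once plus small integer codes,
# so its deep memory footprint is far smaller.
print(s_obj.memory_usage(deep=True))
print(s_cat.memory_usage(deep=True))
```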

See also the API docs on categoricals.

Object Creation

Series Creation

Categorical Series or columns in a DataFrame can be created in several ways:

By specifying dtype="category" when constructing a Series:

In [1]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [2]: s
Out[2]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): ['a', 'b', 'c'] 

By converting an existing Series or column to a category dtype:

In [3]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [4]: df["B"] = df["A"].astype("category")
In [5]: df
Out[5]: 
 A  B
0  a  a
1  b  b
2  c  c
3  a  a 

By using special functions, such as cut(), which groups data into discrete bins. See the example on tiling in the docs.

In [6]: df = pd.DataFrame({"value": np.random.randint(0, 100, 20)})
In [7]: labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
In [8]: df["group"] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)
In [9]: df.head(10)
Out[9]: 
 value    group
0     65  60 - 69
1     49  40 - 49
2     56  50 - 59
3     43  40 - 49
4     43  40 - 49
5     91  90 - 99
6     32  30 - 39
7     87  80 - 89
8     36  30 - 39
9      8    0 - 9 

By passing a pandas.Categorical object to a Series or assigning it to a DataFrame:

In [10]: raw_cat = pd.Categorical(
 ....:    ["a", "b", "c", "a"], categories=["b", "c", "d"], ordered=False
 ....: )
 ....: 
In [11]: s = pd.Series(raw_cat)
In [12]: s
Out[12]: 
0    NaN
1      b
2      c
3    NaN
dtype: category
Categories (3, object): ['b', 'c', 'd']
In [13]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [14]: df["B"] = raw_cat
In [15]: df
Out[15]: 
 A    B
0  a  NaN
1  b    b
2  c    c
3  a  NaN 

Categorical data has a specific category dtype:

In [16]: df.dtypes
Out[16]: 
A      object
B    category
dtype: object 

DataFrame Creation

Analogously to the previous section where a single column was converted to categorical, all columns in a DataFrame can be batch converted to categorical either during or after construction.

This can be done during construction by specifying dtype="category" in the DataFrame constructor:

In [17]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")}, dtype="category")
In [18]: df.dtypes
Out[18]: 
A    category
B    category
dtype: object 

Note that the categories present differ per column; the conversion is done column by column, therefore only labels present in a given column are categories:

In [19]: df["A"]
Out[19]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [20]: df["B"]
Out[20]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd'] 

Likewise, all columns in an existing DataFrame can be batch converted using DataFrame.astype():

In [21]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [22]: df_cat = df.astype("category")
In [23]: df_cat.dtypes
Out[23]: 
A    category
B    category
dtype: object 

This conversion is likewise done column by column:

In [24]: df_cat["A"]
Out[24]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [25]: df_cat["B"]
Out[25]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd'] 

Controlling Behavior

In the examples above where we passed dtype='category', we used the default behavior:

  1. Categories are inferred from the data.
  2. Categories are unordered.

To control those behaviors, instead of passing 'category', use an instance of CategoricalDtype.

In [26]: from pandas.api.types import CategoricalDtype
In [27]: s = pd.Series(["a", "b", "c", "a"])
In [28]: cat_type = CategoricalDtype(categories=["b", "c", "d"], ordered=True)
In [29]: s_cat = s.astype(cat_type)
In [30]: s_cat
Out[30]: 
0    NaN
1      b
2      c
3    NaN
dtype: category
Categories (3, object): ['b' < 'c' < 'd'] 
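Because cat_type is ordered, the resulting values support comparisons and order-aware reductions; a small self-contained sketch re-creating the s_cat above:

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

cat_type = CategoricalDtype(categories=["b", "c", "d"], ordered=True)
s_cat = pd.Series(["a", "b", "c", "a"]).astype(cat_type)  # "a" becomes NaN

# min/max use the category order; NaN values are skipped.
print(s_cat.min(), s_cat.max())
# Element-wise comparison against a category uses that order too.
print(s_cat > "b")
```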

Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories are consistent among all columns.

In [31]: from pandas.api.types import CategoricalDtype
In [32]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [33]: cat_type = CategoricalDtype(categories=list("abcd"), ordered=True)
In [34]: df_cat = df.astype(cat_type)
In [35]: df_cat["A"]
Out[35]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']
In [36]: df_cat["B"]
Out[36]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd'] 

Note

To perform table-wise conversion, where all labels in the entire DataFrame are used as categories for each column, the categories argument can be determined programmatically with categories = pd.unique(df.to_numpy().ravel()).

If you already have codes and categories, you can use the from_codes() constructor to save the factorize step during normal constructor mode:

In [37]: splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])
In [38]: s = pd.Series(pd.Categorical.from_codes(splitter, categories=["train", "test"])) 
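The codes/categories pair round-trips cleanly; a minimal sketch with fixed codes instead of the random splitter above:

```python
import numpy as np
import pandas as pd

codes = np.array([0, 1, 1, 0, 1])
cat = pd.Categorical.from_codes(codes, categories=["train", "test"])

print(list(cat))        # codes decoded into category values
print(list(cat.codes))  # the original integer codes
print(list(cat.categories))
```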

Regaining Original Data

To get back to the original Series or NumPy array, use Series.astype(original_dtype) or np.asarray(categorical):

In [39]: s = pd.Series(["a", "b", "c", "a"])
In [40]: s
Out[40]: 
0    a
1    b
2    c
3    a
dtype: object
In [41]: s2 = s.astype("category")
In [42]: s2
Out[42]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [43]: s2.astype(str)
Out[43]: 
0    a
1    b
2    c
3    a
dtype: object
In [44]: np.asarray(s2)
Out[44]: array(['a', 'b', 'c', 'a'], dtype=object) 

Note

In contrast to R's factor function, categorical data is not converting input values to strings; categories will end up the same data type as the original values.

Note

In contrast to R's factor function, there is currently no way to assign or change labels at creation time. Use categories to change the categories after creation time.

CategoricalDtype

A categorical's type is fully described by

  1. categories: a sequence of unique values and no missing values
  2. ordered: a boolean

This information can be stored in a CategoricalDtype. The categories argument is optional, which implies that the actual categories should be inferred from whatever is present in the data when the pandas.Categorical is created. The categories are assumed to be unordered by default.

In [45]: from pandas.api.types import CategoricalDtype
In [46]: CategoricalDtype(["a", "b", "c"])
Out[46]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=False, categories_dtype=object)
In [47]: CategoricalDtype(["a", "b", "c"], ordered=True)
Out[47]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=True, categories_dtype=object)
In [48]: CategoricalDtype()
Out[48]: CategoricalDtype(categories=None, ordered=False, categories_dtype=None) 

A CategoricalDtype can be used in any place pandas expects a dtype, for example pandas.read_csv(), pandas.DataFrame.astype(), or in the Series constructor.

Note

As a convenience, you can use the string 'category' in place of a CategoricalDtype when you want the default behavior: the categories being unordered and equal to the set of values present in the array. In other words, dtype='category' is equivalent to dtype=CategoricalDtype().
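A small sketch of the equivalence stated in this note (the example values are hypothetical, not from the original):

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

# dtype="category" behaves like an empty CategoricalDtype():
# categories are inferred from the data and unordered.
s1 = pd.Series(["low", "high", "low"], dtype="category")
s2 = pd.Series(["low", "high", "low"], dtype=CategoricalDtype())
print(s1.dtype == s2.dtype)  # True

# An explicit CategoricalDtype can instead fix the categories and an ordering.
ordered_type = CategoricalDtype(categories=["low", "high"], ordered=True)
s3 = pd.Series(["low", "high", "low"], dtype=ordered_type)
print(s3.min(), s3.max())  # low high
```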

Equality semantics

Two instances of CategoricalDtype compare equal whenever they have the same categories and order. When comparing two unordered categoricals, the order of the categories is not considered.

In [49]: c1 = CategoricalDtype(["a", "b", "c"], ordered=False)
# Equal, since order is not considered when ordered=False
In [50]: c1 == CategoricalDtype(["b", "c", "a"], ordered=False)
Out[50]: True
# Unequal, since the second CategoricalDtype is ordered
In [51]: c1 == CategoricalDtype(["a", "b", "c"], ordered=True)
Out[51]: False 

All instances of CategoricalDtype compare equal to the string 'category'.

In [52]: c1 == "category"
Out[52]: True 

Description

Using describe() on categorical data will produce output similar to that of a Series or DataFrame of type string.

In [53]: cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])
In [54]: df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})
In [55]: df.describe()
Out[55]: 
 cat  s
count    3  3
unique   2  2
top      c  c
freq     2  2
In [56]: df["cat"].describe()
Out[56]: 
count     3
unique    2
top       c
freq      2
Name: cat, dtype: object 

Working with categories

Categorical data has a categories and an ordered property, which list the possible values and whether the ordering matters. These properties are exposed as s.cat.categories and s.cat.ordered. If you don't manually specify categories and ordering, they are inferred from the passed arguments.

In [57]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [58]: s.cat.categories
Out[58]: Index(['a', 'b', 'c'], dtype='object')
In [59]: s.cat.ordered
Out[59]: False 

It's also possible to pass in the categories in a specific order:

In [60]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], categories=["c", "b", "a"]))
In [61]: s.cat.categories
Out[61]: Index(['c', 'b', 'a'], dtype='object')
In [62]: s.cat.ordered
Out[62]: False 

Note

New categorical data is not automatically ordered. You must explicitly pass ordered=True to indicate an ordered Categorical.
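A minimal sketch of this: passing ordered=True makes the listed category order meaningful, so comparisons like min() and max() work (the values here are hypothetical).

```python
import pandas as pd

# Ordering is opt-in: ordered=True makes the listed category order
# meaningful ('c' < 'b' < 'a' here).
s = pd.Series(
    pd.Categorical(["a", "b", "c", "a"], categories=["c", "b", "a"], ordered=True)
)
print(s.cat.ordered)     # True
print(s.min(), s.max())  # c a  -- per the category order, not lexical order
```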

Note

The result of unique() is not always the same as Series.cat.categories, because Series.unique() has a couple of guarantees: it returns categories in order of appearance, and it only includes values that are actually present.

In [63]: s = pd.Series(list("babc")).astype(CategoricalDtype(list("abcd")))
In [64]: s
Out[64]: 
0    b
1    a
2    b
3    c
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
# categories
In [65]: s.cat.categories
Out[65]: Index(['a', 'b', 'c', 'd'], dtype='object')
# uniques
In [66]: s.unique()
Out[66]: 
['b', 'a', 'c']
Categories (4, object): ['a', 'b', 'c', 'd'] 

Renaming categories

Renaming categories is done by using the rename_categories() method:

In [67]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [68]: s
Out[68]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [69]: new_categories = ["Group %s" % g for g in s.cat.categories]
In [70]: s = s.cat.rename_categories(new_categories)
In [71]: s
Out[71]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
# You can also pass a dict-like object to map the renaming
In [72]: s = s.cat.rename_categories({1: "x", 2: "y", 3: "z"})
In [73]: s
Out[73]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c'] 

Note

In contrast to R's factor, categorical data can have categories of types other than string.

Categories must be unique or a ValueError is raised:

In [74]: try:
 ....:    s = s.cat.rename_categories([1, 1, 1])
 ....: except ValueError as e:
 ....:    print("ValueError:", str(e))
 ....: 
ValueError: Categorical categories must be unique 

Categories must also not be NaN or a ValueError is raised:

In [75]: try:
 ....:    s = s.cat.rename_categories([1, 2, np.nan])
 ....: except ValueError as e:
 ....:    print("ValueError:", str(e))
 ....: 
ValueError: Categorical categories cannot be null 

Appending new categories

Appending categories can be done by using the add_categories() method:

In [76]: s = s.cat.add_categories([4])
In [77]: s.cat.categories
Out[77]: Index(['Group a', 'Group b', 'Group c', 4], dtype='object')
In [78]: s
Out[78]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (4, object): ['Group a', 'Group b', 'Group c', 4] 

Removing categories

Removing categories can be done by using the remove_categories() method. Values which are removed are replaced by np.nan:

In [79]: s = s.cat.remove_categories([4])
In [80]: s
Out[80]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c'] 
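
The np.nan replacement mentioned above is not visible in the example, since the removed category 4 was unused. A small sketch with hypothetical values where the removed category is still in use:

```python
import pandas as pd

# Removing a category that still occurs in the data replaces those
# values with NaN rather than raising an error.
s = pd.Series(pd.Categorical(["a", "b", "a"], categories=["a", "b"]))
out = s.cat.remove_categories(["b"])
print(out.isna().tolist())       # [False, True, False]
print(list(out.cat.categories))  # ['a']
```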

Removing unused categories

Removing unused categories can also be done:

In [81]: s = pd.Series(pd.Categorical(["a", "b", "a"], categories=["a", "b", "c", "d"]))
In [82]: s
Out[82]: 
0    a
1    b
2    a
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
In [83]: s.cat.remove_unused_categories()
Out[83]: 
0    a
1    b
2    a
dtype: category
Categories (2, object): ['a', 'b'] 


Pandas 2.2 Chinese Official Tutorials and Guides (XVII), part 2: https://developer.aliyun.com/article/1509823
