
Python for Data Analysis -- Data Wrangling: Cleaning, Transforming, Merging, Reshaping

2014-11-19 22:48
Reposted from: http://blog.csdn.net/ssw_1990/article/details/26565069

1. Data Transformation

So far we have covered data rearrangement. Filtering, cleaning, and other transformations are another class of important operations.

2. Removing Duplicates

Duplicate rows often appear in a DataFrame. Here is an example:

In [4]: data = pd.DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
   ...:                      'k2': [1, 1, 2, 3, 3, 4, 4]})

In [5]: data
Out[5]:
    k1  k2
0  one   1
1  one   1
2  one   2
3  two   3
4  two   3
5  two   4
6  two   4

[7 rows x 2 columns]

The duplicated method of DataFrame returns a boolean Series indicating whether each row is a duplicate:

In [6]: data.duplicated()
Out[6]:
0    False
1     True
2    False
3    False
4     True
5    False
6     True
dtype: bool

Relatedly, the drop_duplicates method returns a DataFrame with the duplicate rows removed:

In [7]: data.drop_duplicates()
Out[7]:
    k1  k2
0  one   1
2  one   2
3  two   3
5  two   4

[4 rows x 2 columns]

Both of these methods consider all of the columns by default; alternatively you can specify a subset of columns for detecting duplicates. Suppose you had an additional column of values and wanted to filter duplicates based only on the k1 column:

In [8]: data['v1'] = range(7)

In [9]: data
Out[9]:
    k1  k2  v1
0  one   1   0
1  one   1   1
2  one   2   2
3  two   3   3
4  two   3   4
5  two   4   5
6  two   4   6

[7 rows x 3 columns]

In [10]: data.drop_duplicates(['k1'])
Out[10]:
    k1  k2  v1
0  one   1   0
3  two   3   3

[2 rows x 3 columns]

By default, duplicated and drop_duplicates keep the first observed value combination. Passing take_last=True keeps the last one instead:

In [11]: data.drop_duplicates(['k1', 'k2'], take_last=True)
Out[11]:
    k1  k2  v1
1  one   1   1
2  one   2   2
4  two   3   4
6  two   4   6

[4 rows x 3 columns]
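Note that take_last was later removed from pandas in favor of a keep parameter; as a hedged sketch, assuming pandas 0.17 or later, the equivalent calls are:

# Equivalent calls in newer pandas, where take_last was replaced by keep
data.drop_duplicates(['k1', 'k2'], keep='last')   # keep the last occurrence
data.drop_duplicates(['k1', 'k2'], keep=False)    # drop every duplicated combination entirely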

3. Transforming Data Using a Function or Mapping

For many datasets, you may wish to perform a transformation based on the values in an array, Series, or DataFrame column. Consider the following hypothetical data about kinds of meat:

In [12]: data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami',
   ....:                               'corned beef', 'Bacon', 'pastrami',
   ....:                               'honey ham', 'nova lox'],
   ....:                      'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})

In [13]: data
Out[13]:
          food  ounces
0        bacon     4.0
1  pulled pork     3.0
2        bacon    12.0
3     Pastrami     6.0
4  corned beef     7.5
5        Bacon     8.0
6     pastrami     3.0
7    honey ham     5.0
8     nova lox     6.0

[9 rows x 2 columns]

Suppose you wanted to add a column indicating the type of animal that each food came from. Let's write down a mapping of each distinct meat type to the kind of animal:

In [14]: meat_to_animal = {
   ....:     'bacon': 'pig',
   ....:     'pulled pork': 'pig',
   ....:     'pastrami': 'cow',
   ....:     'corned beef': 'cow',
   ....:     'honey ham': 'pig',
   ....:     'nova lox': 'salmon'
   ....: }

The map method on a Series accepts a function or a dict-like object containing a mapping, but here we have a small problem: some of the meats are capitalized while others are not. We therefore also need to convert each value to lowercase:

In [15]: data['animal'] = data['food'].map(str.lower).map(meat_to_animal)

In [16]: data
Out[16]:
          food  ounces  animal
0        bacon     4.0     pig
1  pulled pork     3.0     pig
2        bacon    12.0     pig
3     Pastrami     6.0     cow
4  corned beef     7.5     cow
5        Bacon     8.0     pig
6     pastrami     3.0     cow
7    honey ham     5.0     pig
8     nova lox     6.0  salmon

[9 rows x 3 columns]

We could also have passed a function that does all the work:

In [17]: data['food'].map(lambda x: meat_to_animal[x.lower()])
Out[17]:
0       pig
1       pig
2       pig
3       cow
4       cow
5       pig
6       cow
7       pig
8    salmon
Name: food, dtype: object

Note:

Using map is a convenient way to perform element-wise transformations and other data cleaning operations.
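Two related behaviors are worth keeping in mind; the snippet below is a small illustrative sketch rather than part of the original example. With a dict mapping, any value not found in the dict maps to NaN, and the lowercasing step can also be done with the vectorized string accessor:

# Values absent from a dict mapping become NaN instead of raising an error
pd.Series(['bacon', 'tofu']).map(meat_to_animal)   # 'tofu' -> NaN

# The lowercasing step can also use the vectorized string accessor
data['food'].str.lower().map(meat_to_animal)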

4. Replacing Values

Filling in missing data with the fillna method can be thought of as a special case of value replacement. While map, as discussed above, can be used to modify a subset of values in an object, replace provides a simpler and more flexible way to do so. Consider this Series:

In [18]: data = pd.Series([1., -999, 2., -999, -1000., 3.])

In [19]: data
Out[19]:
0       1
1    -999
2       2
3    -999
4   -1000
5       3
dtype: float64

The -999 values might be sentinel values for missing data. To replace them with NA values that pandas understands, we can use replace, producing a new Series:

In [20]: data.replace(-999, np.nan)
Out[20]:
0       1
1     NaN
2       2
3     NaN
4   -1000
5       3
dtype: float64

If you want to replace multiple values at once, pass a list of the values to replace along with the substitute value:

In [21]: data.replace([-999, -1000], np.nan)
Out[21]:
0     1
1   NaN
2     2
3   NaN
4   NaN
5     3
dtype: float64

To use a different replacement for each value, pass a list of substitutes:

In [22]: data.replace([-999, -1000], [np.nan, 0])
Out[22]:
0     1
1   NaN
2     2
3   NaN
4     0
5     3
dtype: float64

The argument passed can also be a dict:

In [23]: data.replace({-999: np.nan, -1000: 0})
Out[23]:
0     1
1   NaN
2     2
3   NaN
4     0
5     3
dtype: float64
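replace also works on a whole DataFrame, and a nested dict can restrict each replacement to a particular column; this is a small sketch of that standard behavior (the frame variable here is made up for illustration):

# A nested dict limits each replacement to the named column
frame = pd.DataFrame({'a': [1, -999, 3], 'b': [-999, 5, -1000]})
frame.replace({'a': {-999: np.nan}, 'b': {-999: np.nan, -1000: 0}})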

5. Renaming Axis Indexes

Like values in a Series, axis labels can be transformed by a function or mapping to produce new, differently labeled objects. The axes can also be modified in place without creating a new data structure. Here is a simple example:

In [24]: data = pd.DataFrame(np.arange(12).reshape((3, 4)),
   ....:                     index=['Ohio', 'Colorado', 'New York'],
   ....:                     columns=['one', 'two', 'three', 'four'])

Like a Series, the axis indexes have a map method:

In [25]: data.index.map(str.upper)
Out[25]: array(['OHIO', 'COLORADO', 'NEW YORK'], dtype=object)

You can assign the result to index, modifying the DataFrame in place:

In [26]: data.index = data.index.map(str.upper)

In [27]: data
Out[27]:
          one  two  three  four
OHIO        0    1      2     3
COLORADO    4    5      6     7
NEW YORK    8    9     10    11

[3 rows x 4 columns]

If you want to create a transformed version of the dataset without modifying the original, a more practical approach is rename:

In [28]: data.rename(index=str.title, columns=str.upper)
Out[28]:
          ONE  TWO  THREE  FOUR
Ohio        0    1      2     3
Colorado    4    5      6     7
New York    8    9     10    11

[3 rows x 4 columns]

Notably, rename can be used in conjunction with a dict-like object to provide new values for a subset of the axis labels:

In [31]: data.rename(index={'OHIO': 'INDIANA'},
   ....:             columns={'three': 'peekaboo'})
Out[31]:
          one  two  peekaboo  four
INDIANA     0    1         2     3
COLORADO    4    5         6     7
NEW YORK    8    9        10    11

[3 rows x 4 columns]

rename saves us from the chore of copying the DataFrame and assigning values to its index and columns attributes. Should you wish to modify a dataset in place, pass inplace=True:

In [32]: # Always returns a reference to the DataFrame
In [33]: _ = data.rename(index={'OHIO': 'INDIANA'}, inplace=True)

In [34]: data
Out[34]:
          one  two  three  four
INDIANA     0    1      2     3
COLORADO    4    5      6     7
NEW YORK    8    9     10    11

[3 rows x 4 columns]
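rename also accepts arbitrary functions, not just str methods or dicts, for either axis; the line below is a small sketch, not part of the original walkthrough:

# Any callable works as a label transformer
data.rename(columns=lambda c: c[:2].upper())   # 'one' -> 'ON', 'two' -> 'TW', ...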

6. Discretization and Binning

Continuous data is often discretized or otherwise separated into "bins" for analysis. Suppose you have data about a group of people in a study, and you want to group them into discrete age buckets:

In [35]: ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]

Let's divide these into bins of 18 to 25, 26 to 35, 35 to 60, and finally 60 and older. To do so, you use the pandas cut function:

In [36]: bins = [18, 25, 35, 60, 100]

In [37]: cats = pd.cut(ages, bins)

In [38]: cats
Out[38]:
 (18, 25]
 (18, 25]
 (18, 25]
 (25, 35]
 (18, 25]
 (18, 25]
 (35, 60]
 (25, 35]
 (60, 100]
 (35, 60]
 (35, 60]
 (25, 35]
Levels (4): Index(['(18, 25]', '(25, 35]', '(35, 60]', '(60, 100]'], dtype=object)

pandas returns a special Categorical object. You can treat it like an array of strings indicating the bin names; internally it contains a levels array giving the distinct category names along with a labels attribute coding which bin each age belongs to:

In [39]: cats.labels
Out[39]: array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1])

In [40]: cats.levels
Out[40]: Index([u'(18, 25]', u'(25, 35]', u'(35, 60]', u'(60, 100]'], dtype='object')

In [41]: pd.value_counts(cats)
Out[41]:
(18, 25]     5
(35, 60]     3
(25, 35]     3
(60, 100]    1
dtype: int64
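In newer pandas versions the Categorical attributes used above were renamed; as a hedged note for readers on a recent release, the equivalents are:

# Newer-pandas equivalents of the attributes used above
cats.codes        # integer bin codes, formerly cats.labels
cats.categories   # the bin labels themselves, formerly cats.levels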

Consistent with mathematical notation for intervals, a parenthesis means that the side is open, while a square bracket means it is closed (inclusive). Which side is closed can be changed by passing right=False:

In [42]: pd.cut(ages, [18, 26, 36, 61, 100], right=False)
Out[42]:
 [18, 26)
 [18, 26)
 [18, 26)
 [26, 36)
 [18, 26)
 [18, 26)
 [36, 61)
 [26, 36)
 [61, 100)
 [36, 61)
 [36, 61)
 [26, 36)
Levels (4): Index(['[18, 26)', '[26, 36)', '[36, 61)', '[61, 100)'], dtype=object)

You can also pass your own bin names by setting the labels option to a list or array:

In [43]: group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']

In [44]: pd.cut(ages, bins, labels=group_names)
Out[44]:
 Youth
 Youth
 Youth
 YoungAdult
 Youth
 Youth
 MiddleAged
 YoungAdult
 Senior
 MiddleAged
 MiddleAged
 YoungAdult
Levels (4): Index(['Youth', 'YoungAdult', 'MiddleAged', 'Senior'], dtype=object)

If you pass cut an integer number of bins instead of explicit bin edges, it will compute equal-length bins based on the minimum and maximum values in the data. In the following example, we cut some uniformly distributed data into fourths:

In [45]: data = np.random.rand(20)

In [46]: pd.cut(data, 4, precision=2)
Out[46]:
 (0.037, 0.26]
 (0.037, 0.26]
 (0.48, 0.7]
 (0.7, 0.92]
 (0.037, 0.26]
 (0.037, 0.26]
 (0.7, 0.92]
 (0.7, 0.92]
 (0.037, 0.26]
 (0.26, 0.48]
 (0.26, 0.48]
 (0.26, 0.48]
 (0.037, 0.26]
 (0.26, 0.48]
 (0.48, 0.7]
 (0.7, 0.92]
 (0.037, 0.26]
 (0.7, 0.92]
 (0.037, 0.26]
 (0.037, 0.26]
Levels (4): Index(['(0.037, 0.26]', '(0.26, 0.48]', '(0.48, 0.7]',
                   '(0.7, 0.92]'], dtype=object)

A closely related function, qcut, bins the data based on sample quantiles. Depending on the distribution of the data, cut will not usually result in each bin having the same number of data points. Since qcut uses sample quantiles instead, you get roughly equal-size bins:

In [48]: data = np.random.randn(1000)   # normally distributed

In [49]: cats = pd.qcut(data, 4)   # cut into quartiles

In [50]: cats
Out[50]:
 [-3.636, -0.717]
 (0.647, 3.531]
 [-3.636, -0.717]
 [-3.636, -0.717]
 [-3.636, -0.717]
 (0.647, 3.531]
 [-3.636, -0.717]
 (-0.717, -0.0323]
 (-0.717, -0.0323]
 (0.647, 3.531]
 [-3.636, -0.717]
 (-0.717, -0.0323]
 (0.647, 3.531]
...
 [-3.636, -0.717]
 [-3.636, -0.717]
 (0.647, 3.531]
 (-0.717, -0.0323]
 (0.647, 3.531]
 [-3.636, -0.717]
 [-3.636, -0.717]
 (-0.0323, 0.647]
 [-3.636, -0.717]
 (-0.717, -0.0323]
 (-0.717, -0.0323]
 (-0.0323, 0.647]
 (0.647, 3.531]
Levels (4): Index(['[-3.636, -0.717]', '(-0.717, -0.0323]',
                   '(-0.0323, 0.647]', '(0.647, 3.531]'], dtype=object)
Length: 1000

In [51]: pd.value_counts(cats)
Out[51]:
(-0.717, -0.0323]    250
(-0.0323, 0.647]     250
(0.647, 3.531]       250
[-3.636, -0.717]     250
dtype: int64

Similar to cut, you can pass your own quantiles (numbers between 0 and 1, inclusive):

In [52]: pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])
Out[52]:
 (-1.323, -0.0323]
 (-0.0323, 1.234]
 (-1.323, -0.0323]
 [-3.636, -1.323]
 [-3.636, -1.323]
 (-0.0323, 1.234]
 (-1.323, -0.0323]
 (-1.323, -0.0323]
 (-1.323, -0.0323]
 (1.234, 3.531]
 (-1.323, -0.0323]
 (-1.323, -0.0323]
 (-0.0323, 1.234]
...
 [-3.636, -1.323]
 (-1.323, -0.0323]
 (-0.0323, 1.234]
 (-1.323, -0.0323]
 (-0.0323, 1.234]
 [-3.636, -1.323]
 (-1.323, -0.0323]
 (-0.0323, 1.234]
 (-1.323, -0.0323]
 (-1.323, -0.0323]
 (-1.323, -0.0323]
 (-0.0323, 1.234]
 (-0.0323, 1.234]
Levels (4): Index(['[-3.636, -1.323]', '(-1.323, -0.0323]',
                   '(-0.0323, 1.234]', '(1.234, 3.531]'], dtype=object)
Length: 1000

Note:

We will return to cut and qcut later when discussing aggregation and group operations, as these discretization functions are especially useful for quantile and group analysis.

7. Detecting and Filtering Outliers

Filtering or transforming outliers is largely a matter of applying array operations. Consider a DataFrame with some normally distributed data:

In [53]: np.random.seed(12345)

In [54]: data = pd.DataFrame(np.random.randn(1000, 4))

In [55]: data.describe()
Out[55]:
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean     -0.067684     0.067924     0.025598    -0.002298
std       0.998035     0.992106     1.006835     0.996794
min      -3.428254    -3.548824    -3.184377    -3.745356
25%      -0.774890    -0.591841    -0.641675    -0.644144
50%      -0.116401     0.101143     0.002073    -0.013611
75%       0.616366     0.780282     0.680391     0.654328
max       3.366626     2.653656     3.260383     3.927528

[8 rows x 4 columns]

Suppose you wanted to find values in one of the columns exceeding 3 in absolute value:

In [56]: col = data[3]

In [57]: col[np.abs(col) > 3]
Out[57]:
97     3.927528
305   -3.399312
400   -3.745356
Name: 3, dtype: float64

To select all rows having a value exceeding 3 or -3, you can use the any method on a boolean DataFrame:

In [58]: data[(np.abs(data) > 3).any(1)]
Out[58]:
            0         1         2         3
5   -0.539741  0.476985  3.248944 -1.021228
97  -0.774363  0.552936  0.106061  3.927528
102 -0.655054 -0.565230  3.176873  0.959533
305 -2.315555  0.457246 -0.025907 -3.399312
324  0.050188  1.951312  3.260383  0.963301
400  0.146326  0.508391 -0.196713 -3.745356
499 -0.293333 -0.242459 -3.056990  1.918403
523 -3.428254 -0.296336 -0.439938 -0.867165
586  0.275144  1.179227 -3.184377  1.369891
808 -0.362528 -3.548824  1.553205 -2.186301
900  3.366626 -2.372214  0.851010  1.332846

[11 rows x 4 columns]

Values can easily be set based on these criteria. The following code caps values outside the interval -3 to 3:

In [59]: data[np.abs(data) > 3] = np.sign(data) * 3

In [60]: data.describe()
Out[60]:
                 0            1            2            3
count  1000.000000  1000.000000  1000.000000  1000.000000
mean     -0.067623     0.068473     0.025153    -0.002081
std       0.995485     0.990253     1.003977     0.989736
min      -3.000000    -3.000000    -3.000000    -3.000000
25%      -0.774890    -0.591841    -0.641675    -0.644144
50%      -0.116401     0.101143     0.002073    -0.013611
75%       0.616366     0.780282     0.680391     0.654328
max       3.000000     2.653656     3.000000     3.000000

[8 rows x 4 columns]

Note:

The ufunc np.sign returns an array of 1 and -1 values indicating the sign of each original value.
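The same capping can also be written with the clip method that pandas provides on Series and DataFrame; a minimal sketch:

# Equivalent capping using the built-in clip method
capped = data.clip(-3, 3)   # values below -3 become -3, values above 3 become 3
capped.describe()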

8. Permutation and Random Sampling

Permuting (randomly reordering) a Series or the rows of a DataFrame is easy to do using the numpy.random.permutation function. Calling permutation with the length of the axis you want to permute produces an array of integers indicating the new ordering:

In [61]: df = pd.DataFrame(np.arange(5 * 4).reshape(5, 4))

In [62]: sampler = np.random.permutation(5)

In [63]: sampler
Out[63]: array([1, 0, 2, 3, 4])

That array can then be used in ix-based indexing or with the take function:

In [64]: df
Out[64]:
    0   1   2   3
0   0   1   2   3
1   4   5   6   7
2   8   9  10  11
3  12  13  14  15
4  16  17  18  19

[5 rows x 4 columns]

In [65]: df.take(sampler)
Out[65]:
    0   1   2   3
1   4   5   6   7
0   0   1   2   3
2   8   9  10  11
3  12  13  14  15
4  16  17  18  19

[5 rows x 4 columns]
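ix has since been deprecated in newer pandas releases; positional indexing with iloc gives the same reordering, as a brief aside:

# iloc-based positional indexing is the modern equivalent of ix here
df.iloc[sampler]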

To select a random subset without replacement, one way is to slice off the first k elements of the array returned by permutation, where k is the desired subset size. While there are far more efficient sampling-without-replacement algorithms, this approach uses the tools you already have at hand:

In [66]: df.take(np.random.permutation(len(df))[:3])
Out[66]:
    0   1   2   3
1   4   5   6   7
3  12  13  14  15
4  16  17  18  19

[3 rows x 4 columns]

To generate a sample with replacement, the fastest way is to use np.random.randint to draw random integers:

In [67]: bag = np.array([5, 7, -1, 6, 4])

In [68]: sampler = np.random.randint(0, len(bag), size=10)

In [69]: sampler
Out[69]: array([4, 4, 2, 2, 2, 0, 3, 0, 4, 1])

In [70]: draws = bag.take(sampler)

In [71]: draws
Out[71]: array([ 4,  4, -1, -1, -1,  5,  6,  5,  4,  7])
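Newer pandas versions also ship a sample method on Series and DataFrame that covers both of these cases; a short sketch, assuming pandas 0.16.1 or later:

# Built-in sampling in newer pandas
df.sample(n=3)                  # 3 rows without replacement
df.sample(n=10, replace=True)   # 10 rows with replacement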

9. Computing Indicator/Dummy Variables

Another type of transformation for statistical modeling or machine learning applications is converting a categorical variable into a "dummy" or "indicator" matrix. If a column in a DataFrame has k distinct values, you can derive a matrix or DataFrame containing k columns of 1s and 0s. pandas has a get_dummies function for doing this, though devising one yourself is not difficult. Returning to an earlier example:

In [72]: df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
   ....:                    'data1': range(6)})

In [73]: pd.get_dummies(df['key'])
Out[73]:
   a  b  c
0  0  1  0
1  0  1  0
2  1  0  0
3  0  0  1
4  1  0  0
5  0  1  0

[6 rows x 3 columns]

In some cases, you may want to add a prefix to the columns in the indicator DataFrame, which can then be merged with the other data. get_dummies has a prefix argument for doing just that:

In [74]: dummies = pd.get_dummies(df['key'], prefix='key')

In [75]: df_with_dummy = df[['data1']].join(dummies)

In [76]: df_with_dummy
Out[76]:
   data1  key_a  key_b  key_c
0      0      0      1      0
1      1      0      1      0
2      2      1      0      0
3      3      0      0      1
4      4      1      0      0
5      5      0      1      0

[6 rows x 4 columns]

If a row in a DataFrame belongs to multiple categories, things are a bit more complicated. Consider the MovieLens 1M dataset:

In [77]: mnames = ['movie_id', 'title', 'genres']

In [78]: movies = pd.read_table('movies.dat', sep='::', header=None,
   .....:                       names=mnames)

In [79]: movies[:10]
Out[79]:
   movie_id                               title                        genres
0         1                    Toy Story (1995)   Animation|Children's|Comedy
1         2                      Jumanji (1995)  Adventure|Children's|Fantasy
2         3             Grumpier Old Men (1995)                Comedy|Romance
3         4            Waiting to Exhale (1995)                  Comedy|Drama
4         5  Father of the Bride Part II (1995)                        Comedy
5         6                         Heat (1995)         Action|Crime|Thriller
6         7                      Sabrina (1995)                Comedy|Romance
7         8                 Tom and Huck (1995)          Adventure|Children's
8         9                 Sudden Death (1995)                        Action
9        10                    GoldenEye (1995)     Action|Adventure|Thriller

Adding indicator variables for each genre requires a little bit of wrangling. First, we extract the list of unique genres in the dataset (note the handy use of set.union):

In [80]: genre_iter = (set(x.split('|')) for x in movies.genres)

In [81]: genres = sorted(set.union(*genre_iter))

Now we construct the indicator DataFrame, starting with a DataFrame of all zeros:

In [82]: dummies = pd.DataFrame(np.zeros((len(movies), len(genres))), columns=genres)

Next, iterate through each movie and set the appropriate entries in each row of dummies to 1:

In [83]: for i, gen in enumerate(movies.genres):
   .....:     dummies.ix[i, gen.split('|')] = 1

Then, as before, join this with movies:

In [84]: movies_windic = movies.join(dummies.add_prefix('Genre_'))

In [85]: movies_windic.ix[0]
Out[85]:
movie_id                                       1
title                           Toy Story (1995)
genres               Animation|Children's|Comedy
Genre_Action                                   0
Genre_Adventure                                0
Genre_Animation                                1
Genre_Children's                               1
Genre_Comedy                                   1
Genre_Crime                                    0
Genre_Documentary                              0
Genre_Drama                                    0
Genre_Fantasy                                  0
Genre_Film-Noir                                0
Genre_Horror                                   0
Genre_Musical                                  0
Genre_Mystery                                  0
Genre_Romance                                  0
Genre_Sci-Fi                                   0
Genre_Thriller                                 0
Genre_War                                      0
Genre_Western                                  0
Name: 0

Note:

For much larger data, this method of constructing indicator variables with multiple memberships becomes quite slow; you would need a lower-level function that writes directly to the DataFrame's internals.
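A simpler route that pandas itself provides is the vectorized str.get_dummies string method, which splits on a delimiter and builds the indicator matrix in one call; a brief sketch of that alternative:

# Vectorized alternative: split on '|' and build the indicator matrix in one call
genre_dummies = movies.genres.str.get_dummies(sep='|')
movies_windic = movies.join(genre_dummies.add_prefix('Genre_'))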

A useful recipe for statistical applications is to combine get_dummies with a discretization function like cut:

In [86]: values = np.random.rand(10)

In [87]: values
Out[87]:
array([ 0.75603383,  0.90830844,  0.96588737,  0.17373658,  0.87592824,
        0.75415641,  0.163486  ,  0.23784062,  0.85564381,  0.58743194])

In [88]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

In [89]: pd.get_dummies(pd.cut(values, bins))
Out[89]:
   (0, 0.2]  (0.2, 0.4]  (0.4, 0.6]  (0.6, 0.8]  (0.8, 1]
0         0           0           0           1         0
1         0           0           0           0         1
2         0           0           0           0         1
3         1           0           0           0         0
4         0           0           0           0         1
5         0           0           0           1         0
6         1           0           0           0         0
7         0           1           0           0         0
8         0           0           0           0         1
9         0           0           1           0         0

[10 rows x 5 columns]