language: Python 3.7
Rows: ~27.3 million, about 1.5 GB
File format: CSV; a sample of the dataset is shown below.

I want to run this groupby:

df_login_count = df.groupby(['year', 'month', 'day', 'userid'], as_index=False)['count'].count()
df_login_count.to_csv('login_count.csv', index=False)

But the dataset is so large that it takes a very long time to process. Could any of you more experienced folks suggest an approach, or at least some keywords to search for? Thanks in advance.

year month day time     clftp1 SessionID   user user_id
2019 Mar 27 23:21:16 clftp1 ftpd[5376]: USER fXXex
2019 Mar 27 23:21:16 clftp1 ftpd[5379]: USER umX
2019 Mar 27 23:21:17 clftp1 ftpd[5380]: USER umX
2019 Mar 27 23:21:17 clftp1 ftpd[5383]: USER umX
2019 Mar 27 23:21:18 clftp1 ftpd[5385]: USER umX
2019 Mar 27 23:21:18 clftp1 ftpd[5388]: USER umX
2019 Mar 27 23:21:19 clftp1 ftpd[5389]: USER umX
2019 Mar 27 23:21:19 clftp1 ftpd[5392]: USER umX
2019 Mar 27 23:21:20 clftp1 ftpd[5394]: USER umX
2019 Mar 27 23:21:23 clftp1 ftpd[5402]: USER dXX_ft
2019 Mar 27 23:21:45 clftp1 ftpd[5462]: USER sXXXon
2019 Mar 27 23:21:51 clftp1 ftpd[5476]: USER oXXX_m
2019 Mar 27 23:21:59 clftp1 ftpd[5497]: USER sXXXon
2019 Mar 27 23:22:01 clftp1 ftpd[5503]: USER sXXXon
2019 Mar 27 23:22:02 clftp1 ftpd[5505]: USER sXXXon
2019 Mar 27 23:22:04 clftp1 ftpd[5509]: USER sXXXon
2019 Mar 27 23:22:26 clftp1 ftpd[5559]: USER vtXXXrm
2019 Mar 27 23:22:27 clftp1 ftpd[5563]: USER vtXXXrm
2019 Mar 27 23:22:28 clftp1 ftpd[5568]: USER vtXXXrm
--
※ Posted from: PTT (ptt.cc), from: 114.137.193.101 (Taiwan)
※ Article URL: https://www.ptt.cc/bbs/DataScience/M.1579163874.A.E1C.html
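[Editor's note] Since the groupby here only counts rows per (year, month, day, userid), one option is to stream the CSV in chunks with `pd.read_csv(chunksize=...)` and merge partial counts, so all 27M rows never sit in memory at once. A minimal sketch, assuming the column names from the sample above; the file path is a placeholder:

```python
import pandas as pd

def login_counts(path, chunksize=1_000_000):
    """Count logins per (year, month, day, userid) without loading the whole file."""
    keys = ['year', 'month', 'day', 'userid']
    partials = []
    # Only read the four key columns; each chunk is aggregated immediately
    for chunk in pd.read_csv(path, usecols=keys, chunksize=chunksize):
        partials.append(chunk.groupby(keys).size())
    # A group may be split across chunks, so sum the partial counts per key
    total = pd.concat(partials).groupby(level=keys).sum()
    return total.rename('count').reset_index()
```

The result has the same columns as the original `as_index=False` groupby and can be written out with `to_csv` as before.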
ebullient: Try concatenating the timestamp and user id into one string and computing nunique on it 01/16 20:58
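[Editor's note] One reading of ebullient's string-key suggestion, column names assumed from the sample. Note the semantics: `value_counts()` on the combined key gives logins per (date, user) key, while `nunique()` gives only the number of distinct keys:

```python
import pandas as pd

def key_counts(df):
    # Concatenate the date fields and user id into a single string key per row
    key = (df['year'].astype(str) + '-' + df['month'].astype(str) + '-'
           + df['day'].astype(str) + '-' + df['userid'])
    # value_counts() -> rows per key; key.nunique() would instead return
    # how many distinct (date, user) combinations exist
    return key.value_counts()
```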
drajan: Try modin 01/17 00:07
CPBLWANG5566: Load it into SQLite or another database and do it there; no need to insist on pandas. 27M 01/18 10:57
CPBLWANG5566: rows is small by database standards 01/18 10:57
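[Editor's note] A minimal sketch of the SQLite route using only the standard library: load the CSV into a table once, then let the database do the GROUP BY. Table and column names are assumed, not from the thread:

```python
import csv
import sqlite3

def count_via_sqlite(csv_path, db_path=':memory:'):
    """Load the login CSV into SQLite and count rows per (year, month, day, userid)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE logins (year TEXT, month TEXT, day TEXT, userid TEXT)")
    with open(csv_path, newline='') as f:
        reader = csv.DictReader(f)
        # executemany streams rows in; the whole file is never held in memory
        con.executemany(
            "INSERT INTO logins VALUES (?, ?, ?, ?)",
            ((r['year'], r['month'], r['day'], r['userid']) for r in reader),
        )
    rows = con.execute(
        "SELECT year, month, day, userid, COUNT(*) AS cnt "
        "FROM logins GROUP BY year, month, day, userid"
    ).fetchall()
    con.close()
    return rows
```

With an on-disk `db_path`, the load cost is paid once and later aggregations are just SQL queries.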
youngman77: If you skip pandas, in bash: cut -f1,2,3,8 | sort | uniq -c 01/18 20:34
ctr1: Thanks, everyone — I'll try both modin and a database 01/19 01:32