Top 10 Coding Mistakes Made by Data Scientists


oliyiyi posted on 2019-5-8 19:46:49


By Norman Niemer, Chief Data Scientist

A data scientist is a "person who is better at statistics than any software engineer and better at software engineering than any statistician". Many data scientists have a statistics background but little experience with software engineering. I'm a senior data scientist, ranked in the top 1% on Stack Overflow for Python, and I work with a lot of (junior) data scientists. Here are the 10 mistakes I see most often.


1. Don't share data referenced in code


Data science needs code AND data. So for someone else to be able to reproduce your results, they need to have access to the data. Seems basic but a lot of people forget to share the data with their code.

import pandas as pd

df1 = pd.read_csv('file-i-dont-have.csv')  # fails
do_stuff(df1)

Solution: Use d6tpipe to share data files with your code, upload them to S3/web/Google Drive etc., or save them to a database so the recipient can retrieve the files (but don't add them to git, see below).
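
For illustration, here is a minimal sketch of the S3 route with boto3; the bucket name and keys are hypothetical, and AWS credentials are assumed to be configured:

import boto3

s3 = boto3.client('s3')

# producer: upload the data referenced by the code (hypothetical bucket/key)
s3.upload_file('data.csv', 'my-team-bucket', 'project/data.csv')

# recipient: fetch the same file before running the analysis
s3.download_file('my-team-bucket', 'project/data.csv', 'data.csv')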


2. Hardcode inaccessible paths


Similar to mistake 1, if you hardcode paths others don't have access to, they can't run your code and have to look in lots of places to manually change paths. Booo!

import pandas as pd

df = pd.read_csv('/path/i-dont/have/data.csv')  # fails
do_stuff(df)

# or
import os
os.chdir('c:\\Users\\yourname\\desktop\\python')  # fails

Solution: Use relative paths, global path config variables or d6tpipe to make your data easily accessible.
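
A minimal sketch of the relative-path approach; DATA_DIR here is a hypothetical config variable:

from pathlib import Path
import pandas as pd

# resolve the data directory relative to this script, not the current working directory
DATA_DIR = Path(__file__).resolve().parent / 'data'

df = pd.read_csv(DATA_DIR / 'data.csv')  # works on any machine with the same repo layout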


3. Mix data with code


Since data science code needs data, why not dump it in the same directory? And while you're at it, save images, reports and other junk there too. Yikes, what a mess!

├── data.csv
├── ingest.py
├── other-data.csv
├── output.png
├── report.html
└── run.py

Solution: Organize your directory into categories such as data, reports, and code. See Cookiecutter Data Science or the d6tflow project templates (see #5), and use the tools mentioned in #1 to store and share data.
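
For example, a layout along the lines of the Cookiecutter Data Science template keeps each category in its own place (directory names are illustrative):

├── data
│   ├── data.csv
│   └── other-data.csv
├── reports
│   ├── output.png
│   └── report.html
└── src
    ├── ingest.py
    └── run.py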


4. Git commit data with source code


Most people now version control their code (if you don't, that's another mistake!! See git). In an attempt to share data, it might be tempting to add data files to version control. That's OK for very small files, but git is not optimized for data, especially large files.

git add data.csv

Solution: Use tools mentioned in #1 to store and share data. If you really want to version control data, see d6tpipe, DVC and Git Large File Storage.
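
As a sketch, Git Large File Storage replaces large files in git history with small pointers while still versioning them alongside your code:

git lfs install           # one-time setup per machine
git lfs track "*.csv"     # store CSVs via LFS instead of as regular git objects
git add .gitattributes data.csv
git commit -m "track data via git LFS"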


5. Write functions instead of DAGs


Enough about data, let's talk about the actual code! Since functions are one of the first things you learn when you learn to code, data science code is mostly organized as a series of functions that run linearly. That causes several problems; see 4 Reasons Why Your Machine Learning Code is Probably Bad.

import pandas as pd
import sklearn.svm

def process_data(data):
    data = do_stuff(data)
    data.to_pickle('data.pkl')

data = pd.read_csv('data.csv')
process_data(data)
df_train = pd.read_pickle('data.pkl')
model = sklearn.svm.SVC()
model.fit(df_train.iloc[:, :-1], df_train['y'])

Solution: Instead of linearly chaining functions, data science code is better written as a set of tasks with dependencies between them. Use d6tflow or airflow.
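
To make the idea concrete, here is a minimal hand-rolled sketch of task-based code; it is not d6tflow's or airflow's actual API, and the file names are hypothetical:

import pandas as pd

class LoadData:
    output = 'data_raw.pkl'
    def run(self):
        # hypothetical input file
        pd.read_csv('data.csv').to_pickle(self.output)

class ProcessData:
    upstream = LoadData
    output = 'data_processed.pkl'
    def run(self):
        df = pd.read_pickle(self.upstream.output)
        df = df.dropna()  # placeholder processing step
        df.to_pickle(self.output)

def run_task(task_cls):
    # run upstream dependencies first, then the task itself
    if hasattr(task_cls, 'upstream'):
        run_task(task_cls.upstream)
    task_cls().run()

run_task(ProcessData)

Because each task declares its inputs and persists its output, a real workflow manager can skip tasks whose outputs already exist and rerun only what changed.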


6. Write for loops


Like functions, for loops are the first thing you learn when you learn to code. They are easy to understand, but they are slow and excessively wordy, and they typically indicate you are unaware of vectorized alternatives.

import math

x = range(10)
avg = sum(x) / len(x)
std = math.sqrt(sum((i - avg)**2 for i in x) / len(x))
zscore = [(i - avg) / std for i in x]
# should be: scipy.stats.zscore(x)

# or
groupavg = []
for i in df['g'].unique():
    dfg = df[df['g'] == i]
    groupavg.append(dfg['g'].mean())
# should be: df.groupby('g').mean()

Solution: NumPy, SciPy, and pandas have vectorized functions for most things you might think require for loops.
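
As a quick sketch of the payoff, here is the z-score example again, loop versus one vectorized call:

import numpy as np
import scipy.stats

x = np.random.randn(10_000)

# loop version: pure-Python iteration, and mean/std get recomputed on every pass
zscore_loop = [(v - x.mean()) / x.std() for v in x]

# vectorized version: one call, the iteration happens in compiled code
zscore_vec = scipy.stats.zscore(x)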


7. Don't write unit tests


As data, parameters or user input change, your code might break, sometimes without you noticing. That can lead to bad output, and if someone makes decisions based on your output, bad data will lead to bad decisions!

Solution: Use assert statements to check data quality. pandas has equality tests, d6tstack has checks for data ingestion, and d6tjoin has checks for data joins. Example data checks:

assert df['id'].unique().shape[0] == len(ids)  # have data for all ids?
assert (df.isna().sum() < 0.9).all()  # catch missing values
assert df.groupby(['g', 'date']).size().max() == 1  # no duplicate values/date?
assert d6tjoin.utils.PreJoin([df1, df2], ['id', 'date']).is_all_matched()  # all ids matched?
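
The same kind of checks can also live in a test file so they run automatically. A minimal pytest sketch; the file, loader, and column names are hypothetical:

# test_data_quality.py -- run with `pytest`
import pandas as pd

def load_data():
    # hypothetical loader; replace with your actual ingestion code
    return pd.read_csv('data.csv')

def test_no_duplicate_ids():
    assert not load_data()['id'].duplicated().any()

def test_no_missing_values():
    assert load_data()['value'].notna().all()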


8. Don't document code


I get it, you're in a hurry to produce some analysis. You hack things together to get results to your client or boss. Then a week later they come back and say "can you change xyz" or "can you update this please". You look at your code and can't remember why you did what you did. And now imagine someone else has to run it.

def some_complicated_function(data):
    data = data[data['column'] != 'wrong']
    data = data.groupby('date').apply(lambda x: complicated_stuff(x))
    data = data[data['value'] < 0.9]
    return data

Solution: Take the extra time, even if it's after you've delivered the analysis, to document what you did. You will thank yourself, and others will thank you even more! You'll look like a pro!
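
For instance, a short docstring and inline comments turn the function above into something a colleague (or future you) can change safely; the stated reasons are of course hypothetical:

def some_complicated_function(data):
    """Filter out bad rows, aggregate by date, and drop outliers."""
    # drop rows flagged as bad during ingestion
    data = data[data['column'] != 'wrong']
    # apply the per-group logic date by date
    data = data.groupby('date').apply(lambda x: complicated_stuff(x))
    # values >= 0.9 are outside the valid range (hypothetical cutoff)
    data = data[data['value'] < 0.9]
    return data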


9. Save data as csv or pickle


Back to data, it's DATA science after all. Just like functions and for loops, CSVs and pickle files are commonly used, but they are actually not very good. CSVs don't include a schema, so everyone has to parse numbers and dates again. Pickles solve that but only work in Python and are not compressed. Neither is a good format for storing large datasets.

import pandas as pd

def process_data(data):
    data = do_stuff(data)
    data.to_pickle('data.pkl')

data = pd.read_csv('data.csv')
process_data(data)
df_train = pd.read_pickle('data.pkl')

Solution: Use parquet or other binary data formats with data schemas, ideally ones that compress data. d6tflow automatically saves the data output of tasks as parquet so you don't have to deal with it.
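
A minimal sketch with pandas (writing parquet requires the pyarrow or fastparquet package):

import pandas as pd

df = pd.read_csv('data.csv')          # parse once...
df.to_parquet('data.parquet')         # ...then store with schema and compression

df = pd.read_parquet('data.parquet')  # dtypes come back intact, no re-parsing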


10. Use jupyter notebooks


Let's conclude with a controversial one: Jupyter notebooks are as common as CSVs. A lot of people use them. That doesn't make them good. Jupyter notebooks promote many of the bad software engineering habits mentioned above, notably:

  • You are tempted to dump all files into one directory
  • You write code that runs top-to-bottom instead of as DAGs
  • You don't modularize your code
  • They are difficult to debug
  • Code and output get mixed in one file
  • They don't version control well

It feels easy to get started but scales poorly.

Solution: Use PyCharm and/or Spyder.
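
As a sketch of the modular alternative, reusable logic lives in an importable module and the entry point stays thin; file and function names are hypothetical:

# features.py -- reusable, testable logic
import pandas as pd

def add_zscore(df, col):
    """Append a z-scored copy of `col` to the frame."""
    df[col + '_z'] = (df[col] - df[col].mean()) / df[col].std()
    return df

# run.py -- thin entry point, easy to diff and debug
#   import pandas as pd
#   from features import add_zscore
#   df = add_zscore(pd.read_csv('data.csv'), 'value')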


davil2000 posted on 2019-6-19 16:15:36
A great post.

piiroja posted on 2020-3-19 13:29:34
thx for sharing~

