Slickdeals: What Deals are Users Interested In?


You may have heard of a site named SlickDeals. As a site with more than ten million monthly users, this deal-sharing site is a hot spot for people to share and pass judgment on offers and discounts for a huge variety of things. Ever since the early days of college, I have been visiting this site almost daily to keep up with prices for items of interest. As our boot camp cohort at NYC Data Science learned about web scraping, I felt that it would be a great idea to play around and see what more I could learn about this popular deal-sharing website.

Note: If you are uninterested in the programming aspect and are more interested in the findings, please skip the Data, Scrapy, and Cleaning portions of this post.




The Data

Preliminary Variable Seeking

Since SlickDeals is largely a community-driven website, what better question to ask than what is popular with its users? To measure popularity, I wanted numerical values that could capture it.

 

[Image: Sample deal post on SlickDeals]

Taking a look at a random deal page, I found two such variables: view count and deal score. Now that I had my dependent variables, I needed independent variables to compare against them. Including view count and deal score, I ended up with a total of 15 variables that I wanted in my data set (see the Scrapy section). But how would I get all of this into a table format that is easy to work with?




Scrapy

This is where the Python-based Scrapy comes in handy. As described by its official GitHub repository:

Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

Now that I had figured out what information I wanted to extract from the website, I needed to tell Scrapy how to approach it. If you are interested in learning how to use Scrapy, I recommend checking out this tutorial. Here is a summary of what my spider had to do:




Workflow

In general, the Scrapy Spider needs to know how you want to approach scraping each element. Mine needed to do the following:

1. Login Authorization

SlickDeals uses a forum structure for its deals, which came with one major problem: only members could see all posts. After going through a myriad of suggested solutions, I ended up finding a working solution from the example in the tutorial I provided earlier. The code looks like this:
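Below is a minimal sketch of such a login step, using Scrapy's FormRequest.from_response. The login URL and form field names are assumptions based on a typical vBulletin-style forum, not the author's original code:

```python
import scrapy


class SlickDealsSpider(scrapy.Spider):
    name = "slickdeals"
    # Assumed login page for the forums; verify against the live site.
    start_urls = ["https://slickdeals.net/forums/login.php"]

    def parse(self, response):
        # Fill in and submit the login form found on the page.
        # The field names below are placeholders for a vBulletin form.
        return scrapy.FormRequest.from_response(
            response,
            formdata={
                "vb_login_username": "YOUR_USERNAME",
                "vb_login_password": "YOUR_PASSWORD",
            },
            callback=self.after_login,
        )

    def after_login(self, response):
        # Confirm the login succeeded before crawling member-only pages.
        if b"log in" in response.body.lower():
            self.logger.error("Login failed")
            return
        # From here, request the Hot Deals forum listing (see step 2).
```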




2. Main Parse

Each deal has its own thread/post on the forums. I wanted information from the Hot Deals section, so I needed to tell the Spider to make requests for each of these thread pages. When it is done collecting information from every thread on a page, the Spider needs to find the next page and extract the deals there. The general workflow instruction was written as shown below:
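A sketch of that request-and-paginate loop, written as a method of the spider sketched above (the XPath expressions are illustrative placeholders; the real selectors depend on the forum's markup):

```python
def parse_forum(self, response):
    # Method of the SlickDealsSpider sketched above.
    # Queue a request for each deal thread on the current page.
    for href in response.xpath(
        '//a[contains(@id, "thread_title")]/@href'
    ).getall():
        yield response.follow(href, callback=self.parse_deal)

    # Once every thread on this page is queued, follow the link to
    # the next page of the Hot Deals forum, if there is one.
    next_page = response.xpath('//a[@rel="next"]/@href').get()
    if next_page:
        yield response.follow(next_page, callback=self.parse_forum)
```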




3. Parsing Elements in Each Deal Page

Now for the meat of this entire process. To get each element or variable of interest, the Spider needs to store the results of XPath Selectors:
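One way such a parse method might look, storing each XPath result in a yielded item. Only a few of the 15 fields are shown, and the XPaths themselves are placeholders rather than the author's originals:

```python
def parse_deal(self, response):
    # Method of the SlickDealsSpider sketched above. Scrapy collects
    # yielded dicts as items; field names follow the post's
    # description, XPaths must be matched to the real page markup.
    yield {
        "DealTitle": response.xpath("//h1/text()").get(default="").strip(),
        "ViewCount": response.xpath('//span[contains(@class, "views")]/text()').get(),
        "DealScore": response.xpath('//span[contains(@class, "dealScore")]/text()').get(),
        "DealPrice": response.xpath('//span[contains(@class, "price")]/text()').get(),
        "PostDate": response.xpath('//span[contains(@class, "date")]/text()').get(),
    }
```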

 

Now that the Spider was set up, I needed Scrapy to output a file for me. Using an item pipeline, I had Scrapy dump a .csv output file with these columns. It took a lot of trial and error, but after many hours I was rewarded with an output data set.
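As an aside, Scrapy's built-in feed exports can produce the same kind of file without a custom pipeline: running `scrapy crawl slickdeals -o deals.csv` (spider name assumed) writes every yielded item out as a .csv.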




Cleaning the Data

In order to use the data, I needed to change the variable formats so that I could use packages such as Pandas, NumPy, Matplotlib, and Seaborn for data exploration and visualization. This is what I did with the output from Scrapy (a pandas sketch of these steps follows the list):

  • Change columns to appropriate data types (e.g., strftime and pandas conversion functions)
  • Strip whitespace from DealTitle
  • Remove nonsensical rows (e.g., stickied posts, rules, "Delete", etc.)
  • Remove unwanted substrings (e.g., '$' and ',' in DealPrice)
  • Remove duplicates
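A minimal pandas sketch of these cleaning steps, assuming the column names described above (the keyword filters for nonsensical rows are placeholders):

```python
import pandas as pd

df = pd.read_csv("deals.csv")

# Strip whitespace from DealTitle.
df["DealTitle"] = df["DealTitle"].str.strip()

# Remove unwanted substrings ('$' and ',') from DealPrice.
df["DealPrice"] = df["DealPrice"].str.replace(r"[$,]", "", regex=True)

# Change columns to appropriate data types.
df["ViewCount"] = pd.to_numeric(df["ViewCount"], errors="coerce")
df["PostDate"] = pd.to_datetime(df["PostDate"], errors="coerce")

# Remove nonsensical rows (placeholder keywords for stickies/rules).
mask = df["DealTitle"].str.contains("Rules|Sticky|Delete", case=False, na=False)
df = df[~mask]

# Remove duplicates.
df = df.drop_duplicates()
```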

However, there were several problems I could not resolve, which led to fewer variables being used in the final analysis:

  • The DealPrice column included a lot of non-numerical entries. Beyond the difficulty of removing the '$' and ',' characters from observations that did have numerical values, many entries listed text such as "Buy One Get One Free" or "50% Off" instead of a nominal price. I decided to drop DealPrice because of this (a quick way to gauge the problem is sketched after this list).
  • Some posts displayed additional information about the user who posted the deal, but I could not figure out when this appeared, so I excluded the user reputation and deals-posted columns that I had scraped.
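For illustration, one quick way to see how much of DealPrice lacks a usable number is to coerce the column and count the failures (continuing from the cleaned df in the sketch above):

```python
# Entries like "Buy One Get One Free" or "50% Off" become NaN here,
# which shows how much of the column has no nominal price.
prices = pd.to_numeric(df["DealPrice"], errors="coerce")
print(f"{prices.isna().sum()} of {len(prices)} deals have no numeric price")
```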



Visualization and Analysis with Python Packages

Combining Pandas and Matplotlib, I was able to obtain the following results:

The values of both ViewCount and DealScore are strongly right-skewed, meaning that a small number of posts account for the bulk of the views and deal scores. This is likely because genuinely good deals are rare, while the community posts a large number of marginal or unappealing ones.
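A sketch of how those distributions can be plotted, continuing from the cleaned df above (DealScore is coerced to numeric here as an assumed extra step):

```python
import matplotlib.pyplot as plt
import pandas as pd

df["DealScore"] = pd.to_numeric(df["DealScore"], errors="coerce")

# Side-by-side histograms; a log-scaled y-axis makes the long right
# tails of both measures easier to see.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
df["ViewCount"].plot.hist(bins=50, ax=axes[0], title="ViewCount")
df["DealScore"].plot.hist(bins=50, ax=axes[1], title="DealScore")
for ax in axes:
    ax.set_yscale("log")
plt.tight_layout()
plt.show()
```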


Below are some findings on which categories and stores attract large numbers of views and high deal scores:

[Charts: view counts and deal scores by category and store]

