Frequently Asked Questions

1. Is this DRSBot?

It is not. This site is a project that scrapes reddit for images, and it is non-interactive.

You can, however, view DRSBot results on the Calculator by changing Dataset to DRSBot, or by visiting

2. What is Trimmed Average?

Trimmed average is my solution to the "How representative is reddit of all Computershare account holders?" problem. After Gamestop released the first DRS Actuals, I discovered my results were off by a few million shares. At the time, the sample size was just under 10%.

In statistics, when there is uncertainty about whether the sample set is representative of the whole, it's common to trim results from the top and bottom of the dataset. After trying a few different values, I found that dropping the largest 5% of accounts and the smallest 5% of accounts brought the estimate very close to the actual number.
In July of 2022, the trim was reduced from 5% to 4%, linearly over 30 days.
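
As a rough sketch, the trimming described above looks like the following. This is a minimal illustration, not the site's actual code; the 5% cutoff is the original value mentioned above.

```python
# Trimmed average: drop the largest and smallest `trim` fraction of
# accounts before averaging. 5% per tail was the original cutoff;
# it was later reduced to 4%.

def trimmed_average(balances, trim=0.05):
    """Average the middle (1 - 2*trim) of the sorted balances."""
    data = sorted(balances)
    k = int(len(data) * trim)          # accounts to drop from each tail
    trimmed = data[k:len(data) - k] if k else data
    return sum(trimmed) / len(trimmed)

# Example: 20 accounts; the outliers (1 and 10_000) are dropped.
balances = [1] + [50] * 18 + [10_000]
print(trimmed_average(balances))  # 50.0
```

The point of trimming both tails is that a handful of whale accounts (or joke one-share posts) can swing the average far more than they represent the broader account base.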

2.5. What is the 180-day window?

It really just means that if I haven't seen an image from a given user in the last 6 months, I no longer include their portfolio in the Sample Set. While this reduces the Sample Size drastically (roughly 10% -> 6%), it improves the accuracy of the Sample Set, since records older than 6 months are likely stale.
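
A minimal sketch of that freshness window is below. The field names (user, seen_at) are illustrative, not the site's actual schema.

```python
# Keep only users whose most recent scraped post falls inside the
# 180-day window; older records are considered stale and excluded.
from datetime import datetime, timedelta

WINDOW = timedelta(days=180)

def fresh_sample(records, now=None):
    """records: list of dicts with 'user' and 'seen_at' (datetime).
    Returns one record per user, keeping only users seen recently."""
    now = now or datetime.utcnow()
    latest = {}
    for r in records:                  # most recent sighting per user
        u = r["user"]
        if u not in latest or r["seen_at"] > latest[u]["seen_at"]:
            latest[u] = r
    return [r for r in latest.values() if now - r["seen_at"] <= WINDOW]
```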

You can view the current sample size by clicking the button next to the METRIC selector on the Calculator.

3. How do you predict how many shares are Direct-Registered? How accurate is the prediction?

Honestly, you're looking at my best guess. My best guess is based on the simple formula:

T = N x A
where T is the Total DRS share count, N is the number of Computershare accounts, and A is the average account balance of the Sample Set.
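
In code, the estimate is just that multiplication. The numbers below are made up for illustration only.

```python
# T = N x A: total DRS shares estimated as account count times the
# (trimmed) average balance of the Sample Set.

def estimate_total_drs(num_accounts, avg_balance):
    """Estimate total directly-registered shares."""
    return num_accounts * avg_balance

# Illustrative numbers only, not real figures:
print(estimate_total_drs(100_000, 75.0))  # 7500000.0
```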

The estimate has been fairly accurate over time, usually underestimating. The estimate based on the Trimmed Average is fitted every quarter so that it aligns with the actual progress released by Gamestop.

4. I don't see my post.

This can happen for a variety of reasons. You might be surprised at how many people will post and then delete before I have a chance to scrape it.

Also, my scraper doesn't always do the best job. If I had to guess, it misses roughly 15% of the posts it should capture. This can happen because the image's resolution is too high, the image has moiré patterns, or the post isn't an image post at all.

If you were missed and would like to be included, just shoot me a DM with a link to your post.

5. I updated my existing post using the comments section in r/GMEOrphans but it didn't update on the site.

While the scraper does scrape the r/GMEOrphans subreddit for images, it does not look at the comments section.

Every post is scraped only once (no more than 15 minutes after creation). If you update a post after it's been scraped, the scraper will never know. The only way to update a record is to make a new image post on one of the scraped subreddits.

5.5. r/GMEOrphans does not allow me to make a second post.

That's regrettable, but I am not a moderator there. Updating an existing post/image will not update your record with the reddit scraper.

6. When/How often does the site update?

For the scraper, nothing new goes out to the site until I review and audit it, which I do every evening. A UTC day ends at 6pm for me, and it takes about an hour to review everything and another hour for my server to compile the day's data.

Sometimes I just don't feel like doing it in the evening, and it won't go out until the middle of the next day or so. There is an "As Of" timestamp at the bottom of the Calculator that updates when new results are published.

7. Do you check for fakes/duplicates?

The scraper does keep a database of image hashes, which is useful for identifying reused images or images posted twice (usually by the same author on multiple subreddits). Duplicate images/posts are accounted for. The site does not check for faked posts itself, but the Reddit community is very good at spotting faked images, and posts identified by the community as fake are not counted.
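
A minimal sketch of hash-based duplicate detection is below. The FAQ doesn't specify the site's actual hashing scheme; a plain SHA-256 of the file bytes stands in here (a perceptual hash such as pHash would also catch re-encoded copies of the same screenshot).

```python
# Detect reused images by hashing their bytes and remembering which
# post first used each hash. SHA-256 is a stand-in for the real
# (unspecified) hashing scheme.
import hashlib

seen = {}  # hash -> first post id that used this image

def register(post_id, image_bytes):
    """Return None if the image is new, else the post that used it first."""
    h = hashlib.sha256(image_bytes).hexdigest()
    if h in seen:
        return seen[h]        # duplicate: same image seen before
    seen[h] = post_id
    return None
```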

8. Why does the site show the wrong value for my post?

There are a couple of reasons this can happen:
  • You have multiple Computershare accounts and the scraper incorrectly guessed how many accounts you have or whether you have merged them into one. The logic for account compilation always makes conservative guesses in these situations.
  • You posted an image with a visible dollar amount but no visible share count, and the share count was guessed from the day's closing price.
  • Sometimes I make mistakes when auditing the results of the computer vision. These can be fixed easily enough. Contact me.
  • Your post was identified as a fake by the community.

9. Are you coordinating with DRSBot? Are you using DRSBot data too?

No. We've looked at collaborating and combining data before, but frankly our methodologies are too dissimilar, and the effort required to reconcile them can't be justified.

This doesn't bother me, though. We think of it as independent verification: doing the same thing two dissimilar ways and arriving at the same result is more valuable than combining efforts. Doubly so once Gamestop began releasing DRS Actuals and neither of us was far off.

10. Tell me about the setup.

The solution is cloud-hybrid. I do scraping and data processing onsite because it's significantly cheaper than running in the cloud. The backend is all Python with a smattering of pandas and C#. To extract text from images, I use pytesseract. The front end is all vanilla JavaScript, HTML, and CSS. I use Bootstrap 5 for styling and Chart.js for charts.

I keep a server rack that hosts an HPE ProLiant DL360 G9 dedicated to this project. The system has a single Xeon E5-2680 v4 CPU, 16GB of DDR4 ECC memory, a 500GB U.2 NVMe SSD, and a local RAID 1 for the OS. The project files and databases are backed up nightly: locally to a FreeNAS server running on a ProLiant G5 with a separate rackmount SAS enclosure, and offsite to AWS S3.

Cloud hosting is all AWS. APIs are either API Gateway and DynamoDB, or API Gateway, Lambda, and S3. Front-end assets are served from S3, with CloudFront as the CDN for geo-caching.

11. Can I have access to the data?

Yes. If you just want the raw CSV files that I use for statistics and metrics, download them with the following URL:
Months and Days are 0-padded.
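
Building the zero-padded date portion of the filename looks like this. The base URL and path layout below are placeholders, not the site's real download location; substitute the URL given above.

```python
# Build a daily CSV URL with zero-padded month and day.
# BASE and the year/month/day path layout are assumptions for
# illustration; use the real URL from the FAQ.
from datetime import date

BASE = "https://example.com/csv"  # placeholder host

def csv_url(d: date) -> str:
    return f"{BASE}/{d.year}/{d.month:02d}/{d.day:02d}.csv"

print(csv_url(date(2023, 3, 7)))  # https://example.com/csv/2023/03/07.csv
```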

The following API data is also available:

Base URL:
posts - Reddit Posts
  Parameters (all optional):
    • startTime - epoch timeframe start
    • endtime - unimplemented
    • limit - pagination limit
    • sub - case-sensitive sub name
    • resumeUser - paginated resume username
    • resumeId - paginated resume post id

posts/{username} - Reddit User Posts
  Parameters (optional):
    • resumeId - paginated resume post id

dashboard - Share Allocations
  No parameters.

dashboard/stats - Sample Set Statistics
  Parameters (optional):
    • bot - one of drsbot or scraper (default)

dashboard/highscores - Computershare Account Numbers
  No parameters.

dashboard/chart - Chart Data
  Parameters (required):
    • data - one of: estimates, shares, stats, posts, growth, power, distribution
If an API returns LastEvaluatedKey, the results have been paginated. Pull the reddit username and post id (u and id, respectively) out of it and pass them into the next request as resumeUser and resumeId.
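
The pagination loop described above can be sketched as follows. The fetch callable and the "items" response key are stand-ins for whatever HTTP client and payload shape you are working with; only the LastEvaluatedKey / resumeUser / resumeId handling comes from the API description above.

```python
# Follow LastEvaluatedKey pagination: when a page includes it, copy
# u -> resumeUser and id -> resumeId into the next request's params.

def fetch_all(fetch, params=None):
    """Collect every page of results from a paginated endpoint.
    fetch(params) must return a dict for one page of results."""
    params = dict(params or {})
    items = []
    while True:
        page = fetch(params)
        items.extend(page.get("items", []))   # "items" key is assumed
        key = page.get("LastEvaluatedKey")
        if not key:
            return items                      # no key: last page reached
        params["resumeUser"] = key["u"]
        params["resumeId"] = key["id"]
```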

This site is not affiliated with Computershare or GameStop.