Hacking your SRP time-of-use bill with a DIY Tesla Powerwall made from recycled scooter batteries

I’ve always been fascinated by technology. I love building all types of electronic projects and automating things. Recently I ran across a guy who discussed building a battery backup out of recycled scooter batteries, which contain 18650 rechargeable cells, the same cells you find in Tesla cars and Powerwalls.

I started thinking: would it be possible to trickle charge a battery during the low-cost energy hours (6PM-3PM) and then dump it back into the grid when electricity is expensive (3PM-6PM)? Or at least offset your peak-hour usage with battery power. So I created some formulas and crunched some numbers. Before you read any further, I want to say this is a completely hypothetical experiment. I would never connect this to my SRP system and risk them terminating my service.

Here is a list of SRP’s different electricity rates. You can see the greatest on-peak/off-peak spread is in the months of May, June, July, August, September, and October. The variance in winter is not that great, and you really can’t save much using this method in those months.

Ok, let’s compile a spreadsheet using these formulas.

ROI

Analyzing the ROI of this setup with a simple 3 kWh battery and a 1 kW inverter, we can yield 18.27% per year, with the system completely paying for itself in 5.47 years. Not a bad return for someone with a little bit of money to invest.
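To make the spreadsheet math concrete, here is a minimal Python sketch of the same kind of calculation. The rates, efficiency, and parts cost below are illustrative assumptions, not SRP’s actual numbers or my real spreadsheet inputs; plug in the rates from the table above and your own parts cost.

# Hypothetical peak-shaving ROI sketch. All inputs are assumptions
# for illustration; substitute real SRP rates and your actual costs.
battery_kwh = 3.0        # usable battery capacity
efficiency = 0.85        # assumed round-trip charge/discharge efficiency
peak_rate = 0.20         # assumed summer on-peak $/kWh
off_peak_rate = 0.07     # assumed off-peak $/kWh
peak_days = 183          # roughly May through October
system_cost = 500.0      # assumed total parts cost in dollars

# Each peak day we discharge the battery at peak prices and recharge
# it off-peak, paying a little extra to cover conversion losses.
daily_savings = battery_kwh * (peak_rate - off_peak_rate / efficiency)
annual_savings = daily_savings * peak_days

roi = annual_savings / system_cost       # annual return on parts cost
payback_years = system_cost / annual_savings

print(f"Annual savings: ${annual_savings:.2f}")
print(f"ROI: {roi:.2%}, payback in {payback_years:.2f} years")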

Parts

So I started buying parts.

  • 6 – Scooter batteries
  • 1 – Inverter
  • 1 – Charge/discharge monitor
  • 1 – Smart switch
  • 1 – Battery charger
  • 1 – AC auto transfer switch
  • 1 – Distribution block
  • 1 – Arduino Nano
  • 1 – Buck converter

Part 2

I’ll be assembling and testing over the next week. To be continued…

How to invest like a hedge fund manager with almost a 1,400% return in the last 14 years

A popular strategy that I follow in my own investing is crowdsourcing the picks of hedge fund managers. Many people don’t know this, but any hedge fund that manages over $100 million must report its stock holdings to the SEC on a form called a 13F. There is a website that conveniently maps all of these filings so you can scan through your favorite investment manager’s filings and see what stocks they are buying and selling.

For instance, if you’re a fan of Warren Buffett and Charlie Munger, you can take a look at Berkshire Hathaway’s holdings – https://whalewisdom.com/filer/berkshire-hathaway-inc – including a list of every position that makes up more than 1% of their portfolio.

Or maybe you’re a fan of Ray Dalio and you want to see what Bridgewater is holding? There too, you can pull up a list of all the positions that represent more than 1% of the portfolio.

Let’s take a look at Jim Simons’s fund, Renaissance Technologies.

You get the point. You can get some pretty good insight into what massive hedge funds are buying.
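If you’d rather go straight to the source, the raw filings are public. Here is a minimal Python sketch that lists a manager’s recent 13F-HR filings using the SEC EDGAR submissions API. The CIK shown is Berkshire Hathaway’s as I understand it, and the endpoint and JSON layout are EDGAR’s public ones, but verify both before relying on this.

import requests

# EDGAR's submissions API returns a filer's recent filings as JSON.
# CIK 0001067983 should be Berkshire Hathaway; the SEC asks callers
# to identify themselves with a descriptive User-Agent header.
CIK = "0001067983"
url = f"https://data.sec.gov/submissions/CIK{CIK}.json"
headers = {"User-Agent": "example-research you@example.com"}

data = requests.get(url, headers=headers, timeout=30).json()
recent = data["filings"]["recent"]

# Keep only 13F-HR filings (the quarterly holdings report)
for form, date, accession in zip(
    recent["form"], recent["filingDate"], recent["accessionNumber"]
):
    if form == "13F-HR":
        print(date, accession)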

WhaleIndex

What I like about WhaleWisdom is that they score the most successful hedge fund managers using what they refer to as a WhaleScore, and build a portfolio from those managers’ picks called the WhaleIndex. They then put together a list of 30 stocks based on the successful fund managers’ holdings. Some of their requirements are as follows.

  • Between 5 and 750 holdings in their 13F filing
  • At least 3 consecutive years of quarterly 13F filings
  • Hold no fewer than five stocks in their portfolio
  • Manage more than $100 million in marketable securities
  • Hold at least 20% of their portfolio in their top 20 stocks
  • Managers considered to be a bank, trust, pension, or insurance company are excluded

The top 40 managers whose average WhaleScore over the past five years is higher than the five-year average WhaleScore of the S&P 500 are used in the WhaleIndex. Based on the holdings disclosed in their SEC filings, WhaleWisdom then identifies the 100 stocks most commonly held across those managers’ 13F portfolios.
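To make the screen concrete, here is a hypothetical Python sketch of those eligibility rules applied to a manager record. The field names and the example dict are made up for illustration; WhaleWisdom’s actual implementation is not public.

def passes_whale_screen(manager: dict) -> bool:
    """Rough sketch of the WhaleIndex eligibility rules (field names are hypothetical)."""
    return (
        5 <= manager["num_holdings"] <= 750
        and manager["consecutive_quarters_filed"] >= 12   # 3 years of quarterly 13Fs
        and manager["aum_usd"] > 100_000_000
        and manager["top20_portfolio_pct"] >= 0.20
        and manager["filer_type"] not in {"bank", "trust", "pension", "insurance"}
    )

# Hypothetical example record
example = {
    "num_holdings": 48,
    "consecutive_quarters_filed": 16,
    "aum_usd": 2_500_000_000,
    "top20_portfolio_pct": 0.63,
    "filer_type": "hedge fund",
}
print(passes_whale_screen(example))  # True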

The Whale Fund 2.0 is the one I follow. Since 2006 this strategy has yielded 1,345% – https://whalewisdom.com/whaleindex/portfolio_2_0
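For context, a 1,345% cumulative return over roughly 14 years works out to about 21% per year. A quick back-of-the-envelope check:

# Annualized return implied by a 1,345% cumulative gain over ~14 years (2006-2020)
cumulative_gain = 13.45                  # 1,345% expressed as a decimal multiple
years = 14
cagr = (1 + cumulative_gain) ** (1 / years) - 1
print(f"{cagr:.1%}")                     # roughly 21% per year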

A few words of caution

13F filings come out 45 days after the quarter ends, so the data is somewhat stale. It should also be noted that the fund could have purchased a stock at any time during the quarter, meaning a position could be as much as 135 days old by the time you see it (up to 90 days within the quarter plus the 45-day filing window). Secondly, funds are not required to report short positions or hedged positions, so you should not assume you know exactly what a fund’s full portfolio consists of.

Broker

You’ll want to find a broker that allows for fractional investing if you don’t have enough money to buy full shares. Here are a couple for reference.

Firstrade – 4 free stocks with $100 deposit

Robinhood – Sign up, link your bank account, and get a free stock.

In Conclusion

Despite its drawbacks, the WhaleIndex’s returns are still solid. While I wouldn’t recommend this strategy for your entire portfolio, it is a good way to deploy a fixed percentage of it. There is also a book written about this concept, which you can find here – https://www.amazon.com/Invest-With-The-House-Hacking-ebook/dp/B01A3L1VEO – as well as an ETF built around the idea, the VIP ETF.

Getting St. Louis FRED Data in Google Colab for Python Analysis

When creating trading strategies using big data, you sometimes need access to historical economic data. One of the best sources is FRED (Federal Reserve Economic Data), maintained by the research division of the Federal Reserve Bank of St. Louis. Today I’m going to show you how to pull that data into a DataFrame so that you can analyze it using machine learning or AI.

The first step is to import pandas-datareader. The code below downloads the full history of Moody’s AAA corporate bond yields. Every dataset in FRED has a symbol; in this case it’s DAAA.

import datetime

import pandas as pd
import pandas_datareader.data as web

# Request the full history of the DAAA series from FRED
start = datetime.datetime(1900, 1, 1)
end = pd.to_datetime("today")

df_Corp_AAA_yield = web.DataReader(['DAAA'], 'fred', start, end)

# Spot-check the first and last rows of the DataFrame
print(df_Corp_AAA_yield.head())
print(df_Corp_AAA_yield.tail())

We can now visualize our dataframe by plotting it.

import matplotlib.pyplot as plt

df_Corp_AAA_yield.plot(grid=True)
plt.show()
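Since web.DataReader accepts a list of symbols, you can also pull several FRED series into one DataFrame at once. A small sketch, reusing the start, end, and imports from above (DGS10 and UNRATE are real FRED series I’m using as examples):

# Pull several FRED series into a single DataFrame, one column per symbol
symbols = ['DAAA', 'DGS10', 'UNRATE']  # AAA yields, 10-year Treasury, unemployment
df_multi = web.DataReader(symbols, 'fred', start, end)
print(df_multi.tail())

# Plot each series on its own axis for a quick visual comparison
df_multi.plot(subplots=True, grid=True, figsize=(10, 8))
plt.show()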

Normalizing Stock Data for Machine Learning

When you feed historical time-series data into machine learning models, it usually needs to be normalized first. In this code example, I will show how to get S&P 500 data and then convert it to a simple daily percent increase/decrease as well as a logarithmic daily increase/decrease.

The first part of this code will use yfinance as our data source.

#we're going to use yfinance as our data source
!pip install yfinance --upgrade --no-cache-dir

import pandas as pd
import numpy as np
import yfinance as yf

Next, we’re going to create a DataFrame called df and download SPY data from 2000 onward into it.

# Create a DataFrame of SPY data from 2000 through August 2020
df = yf.download('SPY',
                 start='2000-01-01',
                 end='2020-08-21',
                 progress=True,
                 # auto_adjust=True,  # optionally adjust prices for splits and dividends
                 # actions='inline',  # optionally include dividend/split rows
                 )
# Print the DataFrame to see what lives in it
print(df)

Finally, printing df gives you an idea of what’s inside: daily Open, High, Low, Close, Adj Close, and Volume columns indexed by date.

We’re going to drop all the columns except Adj Close, then rename it adj_close. Next, we’ll create a column labeled simple_rtn, the simple daily return (percent increase/decrease). The line after that creates the logarithmic daily return. A log scale gives equal weight to percentage moves on the Y-axis and can be defined as follows: “A logarithmic price scale uses the percentage of change to plot data points, so, the scale prices are not positioned equidistantly. A linear price scale uses an equal value between price scales providing an equal distance between values.”

# Only keep Adj Close and rename it
df = df.loc[:, ['Adj Close']]
df.rename(columns={'Adj Close': 'adj_close'}, inplace=True)
# Simple daily return (percent change)
df['simple_rtn'] = df.adj_close.pct_change()
# Logarithmic daily return
df['log_rtn'] = np.log(df.adj_close / df.adj_close.shift(1))
print(df)
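One useful property worth knowing: log returns are additive over time, while simple returns compound. A quick sanity check on the two columns we just built (this check is my addition, not part of the original post):

# Log returns sum to the log of the total gross return;
# simple returns must be compounded to get the same number.
total_from_log = np.exp(df['log_rtn'].sum()) - 1
total_from_simple = (1 + df['simple_rtn']).prod() - 1
print(total_from_log, total_from_simple)  # the two totals should match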

This next command just analyzes the data so you can spot-check what you’ve created.

#here we can analyze our data
df.info()

This next command summarizes the daily increase/decrease of SPY: count, mean, standard deviation, min/max, and quartiles for each column, so you can see statistically relevant information about the S&P at a glance.

#get statistical data on the data frame
df.describe()

Next, we can see a distribution of the adjusted close, logarithmic return, and simple return.

#view chart of data to get an overview of what lives in the data
import matplotlib.pyplot as plt
df.hist(bins=50, figsize=(20,15))
plt.show()

That’s all there is to the data normalization. You can now apply different algorithmic analyses to the data.
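If your model expects features with zero mean and unit variance, one common extra step (my addition, not part of the original post) is to z-score the return column:

# Drop the NaN from the first pct_change row, then standardize the log returns
df = df.dropna()
df['log_rtn_z'] = (df['log_rtn'] - df['log_rtn'].mean()) / df['log_rtn'].std()
print(df['log_rtn_z'].describe())  # mean ~0, std ~1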