Jeff Ray is a meteorologist for CBS Dallas-Fort Worth

Jeff Ray is a meteorologist for CBS Dallas-Fort Worth.
When he’s not reporting about the weather, he is probably spending time with his family and staying devoted to a healthy lifestyle.
Jeff spends time biking, lifting weights and maintaining a healthy diet.
Keep reading our exclusive interview to find out how he stays fit, what inspires him, and his easy health tips.

The great @JeffJamFTW is leaving TV tonight; the 10p show is his last broadcast. As difficult as it is for a Longhorn to say this of an Aggie, there is no one I know of higher character. One of the best METS in the marketplace. One of my best friends. I will miss him so. — Jeffrey Ray (@cbs11jeffrey) September 29, 2019

HFR: What is your daily exercise and nutrition routine?

Jeff: Take what the day gives you is my mantra.
I’m always pulled off the path: two very active high school boys, a dual-income household, a demanding job with an inconsistent schedule.
My wife and I close the night going over the next day’s schedule. The plan always includes where our exercise fits in. We can usually get our workout in in the early morning before the school routine starts.
Currently, we are working on a 90-day weightlifting program that focuses on a different muscle group each day.
This program covers a 6-day lift week.
My rest day is my long ride on my bike (30-mile minimum) or a big project in the garden.
I bike ride about 20-25 miles two other days a week, usually with my 16-year-old, who has also started doing endurance events (last August was his 2nd Hotter n’ Hell ride).

My wife is a chiropractor and nutritionist; she closely monitors our supplements. She uses mostly Standard Process and evaluates us every 60-90 days. Everyone in our family takes the basics (fish oil, vitamin D, and B vitamins) but also very specific supplements to keep inflammation and mental fatigue at bay.
We keep our diet rather simple; kind of a mashup of Mediterranean and Paleo.
We prepare nearly all of our meals at home.
The diet that seems to work for us includes four small meals a day (one of them a protein shake).
I keep my calorie intake under 2300 calories a day and drink about 32 ounces of water (usually spiked with lemon juice).
I grow greens and some fruits in my backyard; they play a large role in our diet since I know the source.
I drink a cup of black coffee in the morning and when at work, sip unsweetened hot green tea across the afternoon.
My wife and I have both taken a blood oath to avoid fast food, soda, high-sugar foods, and processed meats: all the stuff that percolates through American culture.

HFR: What keeps you motivated to stay healthy?

Jeff: My job demands higher than normal standards of mental and physical maintenance.
Pride comes into play, too. Working in the public eye on TV exposes you to a judgmental mob, and I work in a business that prefers youth.
I find the desire for a middle-class life with health insurance ample motivation to hold on to it. A good diet and steady exercise also keep my mood elevated and my thinking crisp, the cornerstones of being a good worker.
Two sons who became fathers and had sons and daughters of their own. Happy Father’s Day to all that applies to. — Jeffrey Ray (@cbs11jeffrey) June 16, 2019

HFR: Do you believe that being fit and healthy has contributed to your successful career?

Jeff: To be honest, it is almost an unspoken rule for on-air folks.
HFR: What inspires you in general?

Jeff: I like the quote from George Bush Senior: “Stay as young as you can as long as you can.”
Having two teenage boys is a daily inspiration to keep up the pace.
Being around the young inspires you not to think like a senior. Yes, my hair turned gray about ten years ago, but the second you act the role society wants to give you, you are doomed.
HFR: What tips would you give your fans and our readers for staying healthy?

Jeff: I’m not so vain as to believe I can effect change in anyone.
I will tell you that a bad diet is the source of great unhappiness.
Walk around in America and you can see for yourself how overeating is this country’s great thief of joy.
The majority is overweight.
As Mark Twain said, whenever you find yourself on the side of the majority, it is time to pause and reflect.
HFR: Share something that most people don’t know about you.

Jeff: My ADHD and dyslexia were so bad (and undiagnosed: it was in the ‘60s, after all) I honestly have no idea how I got through school.
Both conditions faded by the time I was in my late 20s.
I did all my learning by reading as an adult; I still read about 1-2 hours every day. When George Bush Senior (quoted above) left office, I left the Party.
I’m in my 60s: all I want to do these days is grow things, make things and love my family.
So I garden, work in my woodshop and go places with my boys that we haven’t seen before.
I’ve never broken a bone or had a significant surgery.
I’ve never taken any medication other than a short period of antibiotics.
I even avoid pain relievers; I believe they cloud your thinking.
I have no desire to retire…ever.
I’ll probably end my career working a drive-thru.

A story about Vermont’s only permanent, supervised housing for people with serious mental illness

A story about Vermont’s only permanent, supervised housing for people with serious mental illness.
Featuring: Anne Donohue, state representative from Northfield and Berlin, editor of Counterpoint; Graham Parker, MyPad director; Connie Stabler, mother and Howard Center board member.

This show is part of a seven-part series I produced for Vermont Public Radio called They Are Us, which features personal stories from inside the state’s mental healthcare system.

Comments: Please make a comment or share a story if you’ve got one. Comments and conversation are part of the point.

Credits:
Series Advisor: Dillon Burns, mental health services director at Vermont Care Partners
Series Associate Producers: Clare Dolan, Mark Davis
Series Executive Director: Sarah Ashworth
VPR Advisors: Franny Bastian and John Dillon
Mixing: Chris Albertine
Digital Producer: Meg Malone
Series Logo: Aaron Shrewsbury

Music for this series is by two excellent Montreal-based bands: Godspeed You! Black Emperor and Esmerine.

Special thanks to the awesome Bruce Cawdron. For more information about the series, visit VPR, where you’ll find the series schedule and resources.
Very big thanks to the following people for their knowledge, time and advice: M.T.
Anderson, Melissa Bailey, Gretchen Brown, Seleem Choudhury, Anne Clement, Jimmy Dennison, Isabelle Desjardins, Laurie Emerson, Deb Fleischman, Laura Flint, Al Gobeille, Alix Goldschmidt, Gary Gordon, Keith Grier, Heather Houle, Jenniflower, Karen Kurrle, Lt.
Maurice Lamothe, Sabrina Leal, Fran Levine, Martie Majoros, Jack McCullough, Mark McGee, Megan McKeever, Betsy Morse, Bess O’Brien, Roxanne Pearson, Julie Potter and her beautiful daughter, Malaika Puffer, Michael Rousse, Marla Simpson, Montpelier Senior Activity Center, Sandy Steingard, Tony Stevens, Cindy Tabor, Gloria Vandenberg, Konstantin von Krusenstiern.

We Want New OGS Members

Half price MARCH MADNESS Membership Special – $40 single NEW membership (digital) in the Ohio Genealogical Society is just $20 through the end of March.
Now is the time to get your relatives and friends involved.
Why join a group? Educational opportunities, the camaraderie of friends, digital resources on our web site, two fantastic periodicals, a 60,000-volume research library, thousands of Facebook friends, and a BIG jamboree in Columbus next month, “Blazing New Trails” – this is why you need to join up.

Send $20 and the new member’s name/address to the Ohio Genealogical Society, 611 State Route 97 W, Bellville OH 44813-8813.

The post We Want New OGS Members appeared first on Ohio Genealogical Society.

Qubole + Snowflake: Transforming Data with Apache Spark — [2 of 3]

Snowflake and Qubole have partnered to bring a new level of integrated product capabilities that make it easier and faster to build and deploy machine learning (ML) and artificial intelligence (AI) models in Apache Spark using data stored in Snowflake and big data sources.
In this second of three blogs, we cover how to perform advanced data preparation with Apache Spark to create refined data sets and write the results to Snowflake, thereby enabling new analytic use cases.

The blog series covers the use cases directly served by the Qubole–Snowflake integration. The first blog discussed how to get started with ML in Apache Spark using data stored in Snowflake. Blogs two and three cover how data engineers can use Qubole to read and write data in Snowflake, including advanced data preparation such as data wrangling, data augmentation, and advanced ETL to refine existing Snowflake data sets.
Making Advanced Data Preparation Easier

Snowflake stores structured and semi-structured data, which allows analysts to create new views and materialized views using SQL-based transformations such as filtering, joining, and aggregation.
However, there are cases where business users need to derive new data sets that require advanced data preparation techniques such as data augmentation, data wrangling, data meshing, data fusion, etc.
In these cases the data engineer, not the analyst, is responsible for the task, and benefits from a cluster computing framework with in-memory primitives, such as Apache Spark, that makes the advanced data preparation process easier and faster. In addition, data engineers need the flexibility to choose the programming language that best suits the task, such as object-oriented languages (Java), functional languages (Python or Scala), or statistical languages (R).
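As a concrete illustration of the kind of preparation described above, here is a minimal Scala sketch that does data wrangling (dropping bad rows) and data augmentation (a join) with Spark DataFrames. It runs on a local SparkSession with made-up sample data; the table and column names are illustrative assumptions, not part of the Qubole or Snowflake APIs.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object DataPrepSketch {
  // Wrangling + augmentation: drop zero-amount orders, then enrich each
  // order with its product name via a join and derive a new column.
  def refine(orders: DataFrame, products: DataFrame): DataFrame =
    orders
      .filter(col("amount") > 0)
      .join(products, Seq("sku"))
      .withColumn("amount_cents", (col("amount") * 100).cast("long"))

  def main(args: Array[String]): Unit = {
    // A local session stands in for a Qubole-managed Spark cluster.
    val spark = SparkSession.builder()
      .appName("advanced-data-prep-sketch")
      .master("local[1]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical inputs; on QDS these could be reads from Snowflake.
    val orders = Seq((1, "SKU-1", 20.0), (2, "SKU-2", 0.0), (3, "SKU-1", 35.5))
      .toDF("order_id", "sku", "amount")
    val products = Seq(("SKU-1", "widget"), ("SKU-2", "gadget"))
      .toDF("sku", "product_name")

    refine(orders, products).show()
    spark.stop()
  }
}
```

In a QDS notebook, the input DataFrames would typically come from Snowflake reads, and the refined output would be written back, as the code samples later in this post show.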
The Qubole–Snowflake integration allows data scientists and other users to:

Leverage the scalability of the cloud. Both Snowflake and Qubole separate compute from storage, allowing organizations to scale computing resources for data processing up or down as needed. Qubole’s workload-aware autoscaling automatically determines the optimal size of the Apache Spark cluster based on the workload.

Securely store connection credentials. When Snowflake is added as a Qubole data store, credentials are stored encrypted and do not need to be exposed in plain text in notebooks. This gives users reliably secure collaboration.

Configure and start up Apache Spark clusters hassle-free. The Snowflake Connector is preloaded with Qubole Apache Spark clusters, eliminating manual steps to bootstrap or load Snowflake JAR files into Apache Spark.

The figure below describes the workflow for using Qubole Apache Spark for advanced data preparation with data stored in Snowflake.

The process starts by loading the data into Snowflake. Once that is complete, data engineers need to make the Snowflake virtual data warehouse visible to Qubole. Then they can choose their preferred language to read data from Snowflake, perform advanced data preparation such as data augmentation, meshing, and correlation, and derive business-focused datasets that are then written back to Snowflake or served to other applications, including dashboards and mobile apps.
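The workflow above can be sketched end to end as a small pipeline: a read step, a preparation step, and a write step. In this sketch the Snowflake read and write are passed in as plain functions so the pipeline logic can run locally; on a QDS cluster they would be the connector calls shown in the code samples later in this post. All names here are illustrative assumptions, not Qubole APIs.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object PipelineSketch {
  // read -> prepare -> write, with the I/O steps injected so the same
  // pipeline works against Snowflake on QDS or local data in a test.
  def run(read: () => DataFrame,
          prepare: DataFrame => DataFrame,
          write: DataFrame => Unit): Unit =
    write(prepare(read()))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pipeline-sketch")
      .master("local[1]")
      .getOrCreate()
    import spark.implicits._

    run(
      // On QDS this step could instead be a Snowflake read, e.g.
      //   spark.read.option("sfDatabase", "<database>")
      //        .snowflake("<data_store>", "<database>", "<table>")
      read = () => Seq(("a", 120.0), ("b", 40.0)).toDF("id", "amount"),
      // The advanced-preparation step; here, a simple derived column.
      prepare = df => df.withColumn("is_large", col("amount") > 100),
      // On QDS this step could write back to a Snowflake table.
      write = df => df.show()
    )
    spark.stop()
  }
}
```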
Adding Snowflake as a Datastore in Qubole

The first step is to connect to a Snowflake Cloud Data Warehouse from the Qubole Data Service (QDS).
We described the details of how to set up a Snowflake Data Store in the first blog of the series.
Reading Data from Snowflake into an Apache Spark Dataframe

When you add Snowflake as a datastore in a QDS account, QDS automatically includes the Snowflake-Apache Spark Connector in each Apache Spark cluster provisioned in that account.
Once the Snowflake virtual data warehouse is defined as a data store, you can use Zeppelin or Jupyter notebooks with your preferred language (Java, Scala, Python, or R) to read and write data to Snowflake using QDS’s Dataframe API. Data engineers can also use the Dataframe API in QDS’s Analyze to accomplish this.

Below is sample code in Scala to read data from Snowflake using the QDS Dataframe API (the angle-bracketed names are placeholders for your own database, data store, and table):

val df = spark.read
  .option("sfDatabase", "<database>")
  .snowflake("<data_store>", "<database>", "<table>")

The screenshot below shows how to use the same code in the Analyze query composer interface.
Writing Data to Snowflake

Once data is processed and a new data set is created, you can write the data to Snowflake using the same Dataframe API interface, specifying in the last parameter the destination table name in Snowflake. Below is a sample Scala code snippet to write data to Snowflake (again with placeholder names):

df.write
  .option("sfDatabase", "<database>")
  .snowflake("<data_store>", "<database>", "<destination_table>")

Summary

The integration between Qubole and Snowflake provides data engineers a secure and easy way to perform advanced preparation of data in Snowflake using Apache Spark on Qubole.
Data teams can leverage the scalability and performance of the Apache Spark cluster computing framework to perform sophisticated data preparation tasks in the language that best suits their need (Java, Python, Scala or R) very efficiently.

Qubole removes the manual steps needed to configure Apache Spark with Snowflake, makes the integration more secure by storing encrypted user credentials (eliminating the need to expose them in plain text), and automatically manages and scales clusters based on workloads.
For more information on Machine Learning in Qubole, visit the Qubole Blog.
To learn more about Snowflake, visit the Snowflake Blog.
Also, reference:
Qubole-Snowflake Integration Guide
Setup Snowflake Datastore
Running Apache Spark Applications on QDS

The post Qubole + Snowflake: Transforming Data with Apache Spark — [2 of 3] appeared first on Qubole.