Category: Spark (2)
남는건 기록뿐

Check the schema: `data.printSchema`. Check the columns: `data.columns`. SELECT: `val pracData = data.select($"fecha_dato", $"ncodpers", $"ind_empleado")`. Change the schema and the names of the columns: `pracData.select($"fecha_dato".cast("timestamp") as "Date", $"ncodpers".cast("string") as "CutomerNum", $"ind_empleado".cast("string") as "employed")`. Change only the name of a column: `pracData.withColumnRenamed("Date", "DateTime")`..
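
A minimal, self-contained sketch of the operations in this excerpt, assuming a local SparkSession and a header-bearing CSV that contains the `fecha_dato`, `ncodpers`, and `ind_empleado` columns; the file path and app name below are placeholders, not from the original post.

```scala
import org.apache.spark.sql.SparkSession

// Assumption: a local session; the CSV path is a placeholder.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("column-select-sketch")
  .getOrCreate()
import spark.implicits._ // enables the $"colName" column syntax

val data = spark.read.option("header", "true").csv("/path/to/train.csv")

// Check schema and columns
data.printSchema()
println(data.columns.mkString(", "))

// SELECT a subset of columns
val pracData = data.select($"fecha_dato", $"ncodpers", $"ind_empleado")

// Change column types and names in a single select
val typed = pracData.select(
  $"fecha_dato".cast("timestamp").as("Date"),
  $"ncodpers".cast("string").as("CutomerNum"),
  $"ind_empleado".cast("string").as("employed")
)

// Change only the name of one column
val renamed = typed.withColumnRenamed("Date", "DateTime")
renamed.printSchema()
```

Note that `withColumnRenamed` returns a new DataFrame rather than mutating the original, so the result has to be assigned to a new value as above.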

Download the binary: `cd /path/to/spark/download`, then `wget http://apache.mirror.cdnetworks.com/spark/spark-3.0.0-preview2/spark-3.0.0-preview2-bin-hadoop3.2.tgz`, `tar zxvf spark-3.0.0-preview2-bin-hadoop3.2.tgz`, and `ln -s spark-3.0.0-preview2-bin-hadoop3.2 default`. Set the environment variables: `export SPARK_HOME="/path/to/spark/download/default"`, `export SPARK_CONF_DIR="${SPARK_HOME}/conf"`, `export PATH="${PATH}:${SP..`
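
As a quick sanity check after the install, a small Scala sketch like the one below can confirm that `SPARK_HOME` is set and that a local session starts. The object name, app name, and master URL are illustrative, and it assumes the Spark jars are on the classpath (for example, a project with `spark-sql` as a dependency).

```scala
import org.apache.spark.sql.SparkSession

// Sanity check after installation (names and master URL are illustrative).
object InstallCheck {
  def main(args: Array[String]): Unit = {
    // The symlinked install directory exported as SPARK_HOME in the excerpt
    println(s"SPARK_HOME = ${sys.env.getOrElse("SPARK_HOME", "(not set)")}")

    // Start a local session and print the version that is actually running
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("install-check")
      .getOrCreate()
    println(s"Spark version: ${spark.version}")
    spark.stop()
  }
}
```

With the `PATH` export from the excerpt in place, simply launching `spark-shell` from the command line and checking the version printed in the startup banner gives the same confirmation.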