Revision 526087f9ebca90f77f78d699c5f8d0243dd8ab61 authored by Marcelo Vanzin on 21 August 2017, 22:09:02 UTC, committed by gatorsmile on 21 August 2017, 22:09:12 UTC
For Hive tables, the current "replace the schema" code path is correct; the only fix
needed is that an exception in that path should surface as an error instead of
triggering a retry through a different code path.

For data source tables, Spark may generate a Hive-incompatible table; for that to
work with Hive 2.1, the detection of data source tables needs to be fixed in the
Hive client so that it also considers the raw tables used by code such as
`alterTableSchema`.
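As a rough sketch of the detection mentioned above (not the actual Spark implementation, and the helper name is hypothetical): Spark records the data source provider in a table property when it creates a data source table, so a raw Hive table can be classified by checking its properties for that marker.

```scala
// Hypothetical helper illustrating how a raw Hive table could be
// recognized as a Spark data source table via its table properties.
object DataSourceTableCheck {
  // Property key Spark writes when creating a data source table.
  val ProviderKey = "spark.sql.sources.provider"

  // A raw table is treated as a data source table when the marker is present.
  def isDataSourceTable(properties: Map[String, String]): Boolean =
    properties.contains(ProviderKey)
}
```

A check along these lines would let the Hive client take the data-source path even for raw tables passed through code such as `alterTableSchema`.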

Tested with existing and added unit tests (plus internal tests with a 2.1 metastore).

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #18849 from vanzin/SPARK-21617.

(cherry picked from commit 84b5b16ea6c9816c70f7471a50eb5e4acb7fb1a1)
Signed-off-by: gatorsmile <gatorsmile@gmail.com>