
Summary of LoadRunner errors and their solutions [repost] - 悠悠着然 - 51Testing软件测试网
I'm new to testing; pointers from more experienced testers are very welcome.
I. Step download timeout (120 seconds)

A frequently encountered error. Work through these steps:

1. In Run-Time Settings, raise the request timeouts to 600 s. Three parameters can be changed in one pass: HTTP-request connect timeout, HTTP-request receive timeout, and Step download timeout; suggested values are 600, 600, and 5000 respectively. After changing them in the script's Run-Time Settings, remember to set the same values in the Controller as well (Options > Run-Time Settings).
2. If step 1 does not help: in Run-Time Settings > Internet Protocol > Preferences, in the Advanced area, enable the "WinInet replay instead of Sockets" option and replay again. Note that this only works on Windows (this tip comes from Zee's material).

A typical message:

Action.c(34): Error -27727: Step download timeout (120 seconds) has expired when downloading resource(s). Set the "Resource Page Timeout is a Warning" Run-Time Setting to Yes/No to have this message as a warning/error, respectively [MsgId: MERR-27727]
Action.c(34): web_link("****") highest severity level was "ERROR", ... body bytes, 547 header bytes [MsgId: MMSG-26388]
Ending action Action.

Fixes:
1. Uncheck Run-Time Settings > Browser Emulation > Download non-HTML resources.
2. Under Run-Time Settings > Preferences > Advanced > Options, change HTTP-request connect timeout (sec) from 120 to 600, and HTTP-request receive timeout (sec) from 120 to 600.

During result analysis, the web page breakdown showed a login-page .aspx under the search page. Preliminary analysis: the search timed out and the system fell back to the login page (the application has a session timeout; after a period of inactivity it returns to the login screen). The steps above resolved it.

Another write-up I came across (author: 风~自由自在):

While testing concurrent edits of purchase receiving, recording and replay were fine, but running the script with a rendezvous of 3 concurrent users kept failing with:

Action.c(30): Error -26612: HTTP Status-Code=500 (Internal Server Error)

Troubleshooting: following the Help hint, I opened the original URL in a browser and got "please log in again". Misled by that, I assumed the Session ID or a cookie had expired and spent ages hunting for a missing correlation. But there was genuinely nothing left to correlate; the defaults had handled it. The developer confirmed this flow involves no Session ID or cookie at all. So why? The step right after the rendezvous is the submit of the edit, so I started from web_submit_data and went digging in the logs.

How to find the log files: Controller -> Results -> Results Settings shows where this run's results are saved -> open the log folder in that directory -> to my delight, it holds the run logs of every Vuser -> open Errors, find the failing Vuser -> open its log file and search for the error. And there I found a passage that nearly moved me to tears:

Action.c(30): <p>Microsoft OLE DB Provider for ODBC Drivers error '80004005'
Action.c(30): [Microsoft][ODBC SQL Server Driver][SQL Server] Transaction (Process ID 53) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Action.c(30): /Purchase/stockin_action.asp, line 205
Action.c(30): Error -26612: HTTP Status-Code=500 (Internal Server Error) for "http://192.168.100.88:88/Purchase/stockin_action.asp?Oper=Edt" [MsgId: MERR-26612]
Action.c(30): t=37758ms: Closing connection to 192.168.100.88 after receiving status code 500 [MsgId: MMSG-26000]
Action.c(30): t=37758ms: Closed connection to 192.168.100.88:88 after completing 43 requests [MsgId: MMSG-26000]
Action.c(30): t=37760ms: Request done "http://192.168.100.88:88/Purchase/stockin_action.asp?Oper=Edt" [MsgId: MMSG-26000]
Action.c(30): web_submit_data("stockin_action.asp") highest severity level was "ERROR", 1050 body bytes, 196 header bytes [MsgId: MMSG-26388]
Ending action Action. [MsgId: MMSG-15918]
Ending iteration 1. [MsgId: MMSG-15965]
Ending Vuser... [MsgId: MMSG-15966]
Starting action vuser_end. [MsgId: MMSG-15919]

Solved. Chilling. It shows how important reading the log files is! Deadlock under concurrency was in fact the focus of this round of testing. Previously the locking was written as a transaction in SQL only, without locking the whole page; the developer said that made page errors likely, so he switched to page-level locking. How exactly, I couldn't follow; I'm an outsider to ASP. Back when transactions conflicted, I had him add a flag, a numeric column incremented on each update, so I could see the effect directly. With the page-level change those flags were removed, so the error left no visible trace. The biggest gain this time is knowing how to find the Controller's log files; from now on an Error won't lead me around by the nose.

A lesson paid for in blood: when I hit error 26612 again later, I skipped the careful log reading and repeated the whole mistake. I searched for ages without finding the cause, finally printed out all the logs again, and the truth appeared at once. So no shortcuts.

II. Connection reset by peer

Not often seen. Usually the download is too slow and times out, so the timeout needs raising. Fix: in the Run-Time Settings window, under Internet Protocol > Preferences, open Set advanced options and raise HTTP-request connect timeout (sec) somewhat.

III. connection refused

The causes here are more involved; it may be simple or may require checking several places, and the fix differs accordingly.

1. First check whether WebLogic is rejecting most connections under load. Monitor WebLogic's connection waits and raise AcceptBacklog, increasing it 25% at a time, to see whether that resolves it. Also grow the connection pool and tune the execute thread count; (connection pool size * Statement Cache Size) should be less than or equal to the Oracle database's maximum connection count.
2. If step 1 changes nothing, check whether the server operating system itself limits connections. On AIX you can edit the limits file directly (vi) to change the connection limit and the TCP connection wait interval. Windows is similar, except the change goes through the registry; see the manual for details (the relevant value is TcpTimedWaitDelay).

IV. too many open files

Generally appears under heavy load, because the server or the middleware itself caps the number of open files. Fixes:

1. Raise the OS limit: on AIX, increase the nofiles value in the limits file, or remove the cap entirely; do this on every server involved.
2. If that is not enough, edit WebLogic's commonEnv.sh: find the nofiles logic and raise the max nofiles value in its else branch. Back the file up before editing in case the change goes wrong.

V. has shut down the connection prematurely

Generally seen when accessing the application server, at both small and large user counts. An explanation from the web:

1) The application has hung. At small user counts this points to a program bug.
2) The application is alive: a server parameter problem. For example, when many clients connecting to a WebLogic application server are refused and the server side shows no error, the AcceptBacklog attribute of WebLogic's server element may be set too low. If connections receive a connection refused message, raise it 25% at a time. Also check connection pool sizing and JVM settings.
3) Database connections: the application service's performance parameters may be too small, or the database's maximum connection count (bounded by hardware memory) is reached.

These points are a useful reference for tuning. If it is the small-user-count case, a program problem, you need a more specialized tool to catch the offending code, mainly SQL statements with very poor execution efficiency; with WebLogic you can locate them with Introscope. Meanwhile, watch whether JVM garbage collection looks normal: at 500 and 600 concurrent users I once saw the JVM curve swing in a sawtooth, rising and falling rapidly, which should not be normal.

VI. Failed to connect to server

The client (that is, the load generator) failed to connect to the service. Two causes: the client's own connection limit, or severe network latency. Fixes:

1. Lower the load generator's TcpTimedWaitDelay registry value.
2. Check network latency to see where in the path the problem lies.

To reduce the chance of this error, apply fix 1 before the test starts, keep the network environment clean, avoid putting too many users on any single load generator, and spread users evenly across generators; then these problems become rare.

Error: Failed to connect to server "192.168.2.192" [10060] Connection timed out
Error: Server "192.168.2.192" has shut down the connection prematurely
Cause: when many clients connecting to the WebLogic application server are refused with no error shown on the server side, the AcceptBacklog attribute of WebLogic's server element may be set too low. On connection refused, raise it 25% at a time; also check the Java connection pool size and JVM settings.
Error: Page download timeout (120 seconds) has expired
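Sections III, IV and VI above come down to OS-level connection and file-descriptor limits. A minimal sketch of how to inspect the per-process open-file limit on a Linux/Unix load generator or server before a test; the 65535 value and the limits-file paths are illustrative assumptions, not recommendations from the original article:

```shell
#!/bin/sh
# Print the current soft limit on open file descriptors for this shell;
# "too many open files" under load usually means this is set too low.
soft=$(ulimit -Sn)
echo "soft nofile limit: $soft"

# Raising it for the current session (requires a high enough hard limit):
#   ulimit -n 65535
# Persistent changes go in the OS limits file, e.g.
# /etc/security/limits.conf on Linux or /etc/security/limits on AIX.
```

Checking this on every machine involved before the run is cheaper than diagnosing the failure afterwards.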
日常错误整理 (day-to-day error notes)

================
Description: while compiling Spark with Maven:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on project spark-parent: Execution default of goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed: A required class was missing while executing org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process: Lorg/sonatype/plexus/build/incremental/BuildContext
NoClassDefFoundError: Lorg/sonatype/plexus/build/incremental/BuildContext
[ERROR] Failed to execute goal on project spark-core_2.10: Could not resolve dependencies for project org.apache.spark:spark-core_2.10:jar:1.0.0: Could not transfer artifact org.tachyonproject:tachyon:jar:0.4.1-thrift from/to maven-repo (http://repo.maven.apache.org/maven2): GET request of: org/tachyonproject/tachyon/0.4.1-thrift/tachyon-0.4.1-thrift.jar from maven-repo failed: Premature end of Content-Length delimited message body (expected: 2073191; received: 1220194) -> [Help 1]
================
Description: failed 2 times due to AM Container for appattempt_6_ exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
See /questions//hadoop-2-2-word-count-example-failing-on-windows-7 and /spring-projects/spring-hadoop-samples/issues/4. The built-in examples fail while self-written examples pass; details at http://fyzjhh./blog/static//
================
Description: when running Shark, "spark master not responding" or "looks down", with:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
One possible cause is that no cores are left; add -Dspark.cores.max=xx when starting Shark to cap core usage. The Spark log showed:
org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = -3184044, local class serialVersionUID = -6333303
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:592)
Cause: the Spark jar bundled with Shark did not match the cluster's Spark version. The cluster here was 1.0.0 while Shark bundled 0.9.0; switching the cluster to 0.9.0 fixed it.
================
Description: the NTP socket is in use, exiting
Cause: an ntpd process is probably already running; stop it by hand.
Description: no servers can be used, exiting
Cause: no network path to the NTP servers, usually a DNS or gateway problem.
================
Description: Scala reports "type mismatch, required Null"
Cause: a variable was declared without a type annotation.
Fix: add the type annotation.
================
Description: patterns after a variable pattern cannot match (SLS 8.1.1) If you intended to match against value ft_result in class tool, you must use backticks, like: case `ft_result` =>
Fix: replace the match with an if.
================
Description:
mysql@YUNWEI-HADOOP-MONGODB:-0-$ mongo --port 27018 -u root -pttxsdb@10
MongoDB shell version: 2.4.3
connecting to: 127.0.0.1:27018/test
Fri Jun 13 09:35:56.228 Socket recv() errno:104 Connection reset by peer 127.0.0.1:27018
Fri Jun 13 09:35:56.228 SocketException: remote: 127.0.0.1:27018 error: 9001 socket exception [1] server [127.0.0.1:27018]
Fri Jun 13 09:35:56.228 DBClientCursor::init call() failed
Fri Jun 13 09:35:56.229 JavaScript execution failed: Error: DBClientBase::findN: transport error: 127.0.0.1:27018 ns: admin.$cmd query: { whatsmyuri: 1 } at src/mongo/shell/mongo.js:L114
exception: connect failed
Cause: the mongodb connection count hit its limit.
================
Description:
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/admin/.ssh/identity
debug1: Offering public key: /home/admin/.ssh/id_rsa
debug1: Authentications that can continue: publickey
debug1: Trying private key: /home/admin/.ssh/id_dsa
debug1: No more authentication methods to try.
Permission denied (publickey).
Cause: ssh key authentication cannot be used after switching accounts; e.g. logged in as root, ssh works, but after switching to user1 it fails.
================
Description: Hive reports "Both left and right aliases encountered in JOIN 's1'"
Cause: Hive does not support non-equality conditions between the two tables' columns in a join condition, e.g. t2.dtlogtime >= t1.s1 in:
select t2.iuin from test.tmp_iss_srcdata_get_abnormal_login t1 join maolu_10.tab_item_sell t2 on ( t1.i0 = t2.iuin and t2.par_datetime in ('201405') and t2.dtlogtime >= t1.s1 and t2.Ireason not in (1003) )
Fix: move the non-equality condition into the where clause:
select t2.iuin from test.tmp_iss_srcdata_get_abnormal_login t1 join maolu_10.tab_item_sell t2 on ( t1.i0 = t2.iuin ) where t2.par_datetime in ('201405') and t2.dtlogtime >= t1.s1 and t2.Ireason not in (1003)
================
Description: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor (the listener cannot identify the service requested in the connect descriptor).
Fix: edit network/admin/listener.ora into the following form, adding the leading SID_LIST_LISTENER section, then restart the listener with lsnrctl:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SID1)
      (ORACLE_HOME = /data/oracle/oracle_11gR2)
      (SID_NAME = SID1)
    )
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = /data/oracle/oracle_11gR2)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = os-1)(PORT = 1521))
    )
  )
ADR_BASE_LISTENER = /data/oracle
================
Description: ORA-27101: shared memory realm does not exist
Fix: on the server:
create spfile from pfile='/data/oracle/admin/sid1/pfile/init.ora.5';
startup;
================
Description:
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
java.lang.VerifyError: class org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
Fix: take ./lib_managed/jars/edu.berkeley.cs.shark/hive-exec/hive-exec-0.11.0-shark-0.9.1.jar, unpack it, delete all class files under com/google/protobuf, repack it, and overwrite the original jar.
================
Description: the Linux join command reports "file 1 is not in sorted order"
Cause: one possible cause is that the two files use different line endings; one of them has \r\n.
Fix: strip the carriage returns, e.g. cat x | tr -d '\r' > x.unix
================
Description: Hive fails with Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.unset(Ljava/lang/String;)V
Cause: Hive 0.13.0 is too new for the older Hadoop, which lacks that method.
Fix: downgrade Hive to 0.11.0.
================
Description: Hive SQL reports "Expression not in GROUP BY key"
Cause: the query has a group by but is ambiguous, e.g. a selected column has no aggregate function.
Fix: rewrite the SQL.
================
Description: Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again
Fix: yum install gcc kernel kernel-devel, then reboot the machine.
================
Description: "Building the OpenGL support module" fails (VirtualBox guest additions)
Fix:
export MAKE='/usr/bin/gmake -i'
./VBoxLinuxAdditions.run
================
Description: running a query through hiveserver, the JDBC client reports:
Query returned non-zero code: 2, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
The hiveserver log shows:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
The task log shows:
UDFArgumentException: The UDF implementation class 'com.udf.Converter_long2str' is not present in the class path
Cause: after the UDF jar was copied into Hive's lib directory, hiveserver never loaded the class.
Fix: restart hiveserver so it reloads the jars.
================
Description:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
OpenJDK 64-Bit Server VM warning: You have loaded library /home/soulmachine/local/opt/hadoop-2.2.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Cause: the machine is 64-bit but the native library is 32-bit; it must be recompiled on a 64-bit machine. The native libraries in the official Hadoop release are all 32-bit; to support 64-bit you must rebuild them yourself (hard to see why the official default is 32-bit when practically every OS, let alone production, runs 64-bit). The YARN documentation says as much: "The pre-built 32-bit i386-Linux native hadoop library is available as part of the hadoop distribution and is located in the lib/native directory. You can download the hadoop distribution from Hadoop Common Releases." Details: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html
Fix: check out the source, rebuild the native library on a 64-bit system, and replace the lib:
# checkout the source
svn co https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0/
# change directory
cd release-2.2.0
# build
mvn clean package -Pdist,native -DskipTests -Dtar
================
Description: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.12.210 for ServerName
Cause: the machine name cannot be found in DNS.
Fix: set ServerName localhost in the configuration file.
================
Description: java.io.IOException: File <filename> could only be replicated to 0 nodes, instead of 1
Cause: the datanode did not start successfully.
Fix: restart the datanode, or re-format the namenode.
================
Description:
jianghehui@YunWei-Jumper:~/softs$ mysql -h xxxx -P 3306 -uroot -p
jianghehui@YunWei-Jumper:~/softs$ mysql -h
jianghehui@YunWei-Jumper:~/softs$ mysql -V
jianghehui@YunWei-Jumper:~/softs$ mysql
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Cause: at build time, bin was tied to an absolute path.
Fix: invoke mysql by its absolute path, or add it to PATH.
================
Description: You don't have permission to access /index.html on this server
Cause: index.html was created by root, and Apache lacks permission.
Fix: in the Apache configuration file httpd.conf, find this section:
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    deny from all
    Satisfy all
</Directory>
Change "deny from all" to "allow from all", save, restart Apache, and the page loads normally.
================
Description: after executing mysql "reset slave", stale replication information remains.
Fix: use reset slave all.
================
Description: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
Fix: package your classes into the job jar.
Description: not a SequenceFile
Fix: the input is declared as a SequenceFile; actually create one.
Description: job Submission failed with exception 'java.io.IOException(The ownership/permissions on the staging directory /tmp/hadoop-hadoop-user1/mapred/staging/hadoop-user1/.staging is not as expected. It is owned by hadoop-user1 and permissions are rwxrwxrwx. The directory must be owned by the submitter hadoop-user1 or by hadoop-user1 and permissions must be rwx------)
Fix: hadoop fs -chmod -R 700 /tmp/hadoop-hadoop-user1/mapred/staging/hadoop-user1/.staging
Description: Permission denied: user=xxj, access=WRITE, inode="user":hadoop:supergroup:rwxr-xr-x
Fix: add to the configuration:
<property><name>dfs.permissions</name><value>false</value></property>
Description: writablename cannot load class
Cause: your custom Writable class is not on the classpath.
Description: Type mismatch in key from map: expected org.apache.hadoop.io.BytesWritable, recieved org.apache.hadoop.io.LongWritable
Cause: the key type does not match the declared one.
Description: Cleaning up the staging area hdfs://192.168.12.200:9000/tmp/hadoop-root/mapred/staging/jianghehui/.staging/job__0004
Cause: a problem in the submitted SQL, e.g. a table name used without its database name.
================
Description: PHP startup: Unable to load dynamic library './php_mysql.dll' (module not found); undefined function mysql_connect()
Fix, in summary: extension_dir must be set correctly; add the PHP install directory to %path%; and copy the dependent dlls into %windir%\system32.
================
Description: device "eth0" does not seem to be present, delaying initialization
Cause: the VM was cloned from a Linux template, so the NIC configuration (mainly the MAC address) was copied along, but the virtual server is assigned a different MAC, so the interface fails to come up.
Fix:
1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 and make sure ONBOOT is yes.
2. Check whether the MAC in ifcfg-eth0 matches the one ifconfig shows, and correct ifcfg-eth0's MAC.
3. Restart the services: service NetworkManager restart; service network restart.
4. The system will then recognize the NIC correctly.
================
Description: Keepalived tests fail; /var/log/messages shows Keepalived_healthcheckers: IPVS: Can't initialize ipvs: Protocol not available
Cause: the LVS module failed to load; lsmod | grep ip_vs indeed shows no module, while normally there should be one.
Fix: load the module by hand:
modprobe ip_vs
modprobe ip_vs_wrr
and add both modprobe lines to /etc/rc.local so they load at boot.
================
Description: Hive: FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Unexpected exception caught. NestedThrowables: java.lang.reflect.InvocationTargetException; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Cause: unknown.
Fix: delete $HADOOP_HOME/build
================
Description: WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Cause: an obsolete class name.
Fix: replace EventCounter in all log4j.properties files, including lib/hive-common-0.10.0.jar!/hive-log4j.properties, with org.apache.hadoop.log.metrics.EventCounter.
================
Description: Hadoop fails to start with JAVA_HOME is not set and could not be found.
Fix: set the JAVA_HOME variable manually in libexec/hadoop-config.sh or another startup script.
================
Description: Hive: FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Cause: the JDBC driver did not load, or the metastore database was never created and the URL lacks createDatabaseIfNotExist=true.
Fix: put the mysql or derby driver on the classpath.
================
Description: Eclipse CDT fails to start with "Failed to load the JNI shared library"
Cause: the JDK is 64-bit while Eclipse is 32-bit; the bitness differs.
Fix: install a JDK and an Eclipse of the same bitness.
================
Description: with a MySQL metastore, Hive's show tables reports "Index column size too large. The maximum column size is 767 bytes."
Fix: change the metastore database's character set to latin1.
================
Description: a Hive query reports "Table not found" although the table clearly exists.
Fix: qualify the table name with its database name.
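The join entry above is easy to reproduce and verify. A small sketch (file names f1 and f2 are made up for the demo): one input has DOS (\r\n) line endings, so its join keys carry a trailing \r and no longer compare equal to the Unix-ended file's keys; stripping the carriage returns with tr fixes it.

```shell
#!/bin/sh
# f1 has DOS line endings, so its keys are "a\r" and "b\r";
# f2 has Unix line endings, so its keys are "a" and "b".
printf 'a\r\nb\r\n' > f1
printf 'a x\nb y\n' > f2

tr -d '\r' < f1 > f1.unix   # the fix: strip carriage returns
join f1.unix f2             # keys now compare equal: prints "a x" and "b y"
```

The same \r\n mismatch also breaks sort/comm/diff comparisons, so normalizing line endings first is worth making a habit.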