
名人初乃玩2021 (rating: 10.0)

Genre: Hong Kong/Taiwan variety show · Region: Taiwan, China · Year: 2021

Starring: not listed

Director: unknown

Plot introduction for 《名人初乃玩2021》

名人初乃玩2021 English title (a pinyin romanization): mingrenchunaiwan2021

Released in 2021. The director is not listed, the screenwriter and production crew are likewise uncredited, and the listing names no cast members. The show premiered in Taiwan, China in 2021.

Douban rating: 10.0, which puts it among the highest-rated titles on the site; few works reach this score, so it comes recommended. It is a Hong Kong/Taiwan variety show, produced in Taiwan, China, and available in a Mandarin-language version.

Where the generated synopsis should appear, the page instead renders a Werkzeug debugger dump from the Flask application that produces these descriptions. The underlying traceback:

Traceback (most recent call last):
  File "C:\Users\Administrator\PycharmProjects\pythonProject\ai\gpt.py", line 18, in hello_world
    response = openai.Completion.create(
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: Rate limit reached for default-text-davinci-003 in organization org-MoXIMAaMZ8yZRQKBqHOu8A6W on requests per min. Limit: 60 / min. Please try again in 1s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 2551, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 2531, in wsgi_app
    response = self.handle_exception(e)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\Administrator\PycharmProjects\pythonProject\venv\lib\site-packages\flask\app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "C:\Users\Administrator\PycharmProjects\pythonProject\ai\gpt.py", line 43, in hello_world
    first_line = lines[0].strip('\n')  # take the first line
IndexError: list index out of range
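The chained exceptions point to two separate problems: the openai.Completion.create call in gpt.py hits a requests-per-minute rate limit, and the except path at line 43 then re-reads key.txt, finds no lines, and fails with IndexError on lines[0]. Below is a minimal, more defensive sketch of such a view. Only the pieces visible in the traceback are taken from it (the key.txt file, the text-davinci-003 model, the Chinese prompt template, and the sampling parameters); the route, the load_api_key helper, and the retry logic are illustrative assumptions, written against the pre-1.0 openai Python package that the traceback shows.

# A minimal sketch, not the site's actual code: reconstructed from the traceback
# fragments, with guards for the two failures seen there (an empty key.txt and
# openai.error.RateLimitError). Assumes the pre-1.0 `openai` package.
import time

import openai
from flask import Flask

app = Flask(__name__)


def load_api_key(path='key.txt'):
    """Return the first non-empty line of the key file, or None if it is missing or empty."""
    try:
        with open(path, 'r', encoding='utf-8') as f:
            for line in f:
                line = line.strip()
                if line:
                    return line
    except OSError:
        pass
    return None


@app.route('/synopsis/<title>')  # hypothetical route; the real URL rule is not shown in the traceback
def hello_world(title):
    api_key = load_api_key()
    if api_key is None:
        # An empty or missing key.txt is what made lines[0] raise IndexError in the original handler.
        return 'API key is not configured', 500

    openai.api_key = api_key
    prompt = '写一个《%s》的剧情简介,100字左右' % title  # prompt template as seen in the traceback

    # Retry with a short backoff instead of letting RateLimitError escape into the WSGI layer.
    for attempt in range(3):
        try:
            response = openai.Completion.create(
                model='text-davinci-003',
                prompt=prompt,
                temperature=0.6,
                max_tokens=500,
                top_p=1.0,
            )
            return response['choices'][0]['text'].strip()
        except openai.error.RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s between attempts
    return 'Synopsis generation is temporarily rate limited, please retry later', 503

The other exposure worth noting is that the server is apparently running with the interactive debugger enabled, which is why a full traceback (complete with a console PIN prompt) was rendered into a public page at all; with debug mode off, the same failure would surface as a plain 500 response.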


