In a previous article, "使用 Elasticsearch 检测抄袭（一）", I showed how to detect plagiarized articles. This is useful in many real-world situations: my own articles on CSDN are frequently quoted or copied, sometimes without any attribution, which is unfair to the author, and the same problem applies to many blog platforms. The setup in that article, however, may not be easy for every developer to reproduce. In this article I therefore use a self-managed (local) deployment and a Jupyter notebook, so developers can run the whole example end to end, step by step.

## Installation

### Install Elasticsearch and Kibana

If you do not yet have your own Elasticsearch and Kibana, please refer to the following articles to install them:

- 如何在 Linux，MacOS 及 Windows 上进行安装 Elasticsearch
- Kibana：如何在 Linux，MacOS 及 Windows 上安装 Elastic 栈中的 Kibana

Choose Elastic Stack 8.x when installing, and take note of the setup information printed during installation (the password of the elastic user and the CA certificate are needed later). To upload and run the models, we need a Platinum subscription or a trial license.

## Uploading the models

Note: if we upload the models from the command line as shown here, we do not need to upload them again in the Python code below and can skip those steps.

We can refer to the earlier article "Elasticsearch：使用 NLP 问答模型与你喜欢的圣诞歌曲交谈". We use the following command to upload the OpenAI detector model:

```bash
eland_import_hub_model --url https://elastic:o6G_pvRL8P*7ono6XH@localhost:9200 \
    --hub-model-id roberta-base-openai-detector \
    --task-type text_classification \
    --ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
    --start
```

Adjust the certificate path and the Elasticsearch endpoint above to match your own configuration. The newly uploaded model can be viewed in Kibana.

Next, we upload the text embedding model in the same way:

```bash
eland_import_hub_model --url https://elastic:o6G_pvRL8P*7ono6XH@localhost:9200 \
    --hub-model-id sentence-transformers/all-mpnet-base-v2 \
    --task-type text_embedding \
    --ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
    --start
```

To make it easier to follow along, the code can be downloaded as follows:

```bash
git clone https://github.com/liu-xiao-guo/elasticsearch-labs
```

The Jupyter notebook is found at the following location:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
plagiarism_detection_es_self_managed.ipynb
```

## Running the code

Next we start running the notebook. We first install the required Python packages:

```bash
pip3 install elasticsearch==8.11
pip3 -q install eland elasticsearch sentence_transformers transformers torch==2.1.0
```

Before running the code, we set the following environment variables:

```bash
export ES_USER="elastic"
export ES_PASSWORD="o6G_pvRL8P*7ono6XH"
export ES_ENDPOINT="localhost"
```

We also need to copy the Elasticsearch certificate into the current directory:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ cp ~/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt .
$ ls
http_ca.crt    plagiarism_detection_es.ipynb    plagiarism_detection_es_self_managed.ipynb
```

### Import packages

```python
from elasticsearch import Elasticsearch, helpers
from elasticsearch.client import MlClient
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel
from urllib.request import urlopen
import json
from pathlib import Path
import os
```

### Connect to Elasticsearch

```python
elastic_user = os.getenv("ES_USER")
elastic_password = os.getenv("ES_PASSWORD")
elastic_endpoint = os.getenv("ES_ENDPOINT")

url = f"https://{elastic_user}:{elastic_password}@{elastic_endpoint}:9200"
client = Elasticsearch(url, ca_certs="./http_ca.crt", verify_certs=True)

print(client.info())
```
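If you uploaded the two models from the command line earlier, the original post verifies them through Kibana screenshots. As an optional check from the notebook itself, the following minimal sketch (my addition, not part of the original notebook) confirms the license level and lists the trained models in the cluster; it assumes the `client` created above:

```python
# Optional verification (not in the original notebook): confirm that the license
# allows running ML models and that the uploaded models are present.
print(client.license.get()["license"]["type"])  # expected: "trial", "platinum" or "enterprise"

for m in client.ml.get_trained_models()["trained_model_configs"]:
    print(m["model_id"])
# Expect to see roberta-base-openai-detector and
# sentence-transformers__all-mpnet-base-v2 in the list.
```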
### Upload the detector model

```python
hf_model_id = "roberta-base-openai-detector"
tm = TransformerModel(model_id=hf_model_id, task_type="text_classification")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body
```

We can check it in Kibana.

### Upload the text embedding model

```python
hf_model_id = "sentence-transformers/all-mpnet-base-v2"
tm = TransformerModel(model_id=hf_model_id, task_type="text_embedding")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body
```

We can check it in Kibana.

### Create the source index

```python
client.indices.create(
    index="plagiarism-docs",
    mappings={
        "properties": {
            "title": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "abstract": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "url": {"type": "keyword"},
            "venue": {"type": "keyword"},
            "year": {"type": "keyword"}
        }
    }
)
```

We can check it in Kibana.

### Create the checker ingest pipeline

```python
client.ingest.put_pipeline(
    id="plagiarism-checker-pipeline",
    processors=[
        {
            "inference": {  # runs an ML model against the data being ingested through the pipeline
                "model_id": "roberta-base-openai-detector",  # text classification model ID
                "target_field": "openai-detector",  # target field for the inference results
                "field_map": {
                    # Maps the document field name to the field name the model expects.
                    # Typically for NLP models, that field name is text_field.
                    "abstract": "text_field"
                }
            }
        },
        {
            "inference": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",  # text embedding model ID
                "target_field": "abstract_vector",  # target field for the inference results
                "field_map": {
                    "abstract": "text_field"
                }
            }
        }
    ]
)
```

We can check it in Kibana.
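The pipeline is not exercised until we reindex below. As an optional sanity check (my addition, not part of the original notebook), we can dry-run it against a single made-up document with the simulate API before processing the whole dataset:

```python
# Optional dry run (not in the original notebook): push one sample document
# through the pipeline and look at the enriched result.
sample_doc = {"_source": {"title": "A sample paper",
                          "abstract": "We study plagiarism detection with Elasticsearch."}}

sim = client.ingest.simulate(id="plagiarism-checker-pipeline", docs=[sample_doc])
enriched = sim["docs"][0]["doc"]["_source"]

print(enriched["openai-detector"]["predicted_value"])        # e.g. "Real" or "Fake"
print(len(enriched["abstract_vector"]["predicted_value"]))   # expected embedding length: 768
```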
### Create the plagiarism checker index

```python
client.indices.create(
    index="plagiarism-checker",
    mappings={
        "properties": {
            "title": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "abstract": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "url": {"type": "keyword"},
            "venue": {"type": "keyword"},
            "year": {"type": "keyword"},
            "abstract_vector.predicted_value": {  # inference results field: target_field.predicted_value
                "type": "dense_vector",
                "dims": 768,  # embedding size
                "index": True,
                "similarity": "dot_product"  # similarity function used when indexing vectors for approximate kNN search
            }
        }
    }
)
```

We can check it in Kibana.

### Ingest the source documents

We first download the dataset from https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/datasets/emnlp2016-2018.json into the current directory:

```bash
$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
emnlp2016-2018.json    http_ca.crt    models
plagiarism_detection_es.ipynb    plagiarism_detection_es_self_managed.ipynb
```

As shown above, emnlp2016-2018.json is the downloaded dataset.

```python
# Load data into a JSON object
with open("emnlp2016-2018.json") as f:
    data_json = json.load(f)

print(f"Successfully loaded {len(data_json)} documents")

def create_index_body(doc):
    """Generate the body for an Elasticsearch document."""
    return {
        "_index": "plagiarism-docs",
        "_source": doc,
    }

# Prepare the documents to be indexed
documents = [create_index_body(doc) for doc in data_json]

# Use helpers.bulk to index
helpers.bulk(client, documents)

print("Done indexing documents into plagiarism-docs source index")
```

We can check the documents in Kibana.

### Reindex with the ingest pipeline

```python
client.reindex(
    wait_for_completion=False,
    source={"index": "plagiarism-docs"},
    dest={"index": "plagiarism-checker", "pipeline": "plagiarism-checker-pipeline"}
)
```

Above we set wait_for_completion=False, so the reindex is an asynchronous operation and we have to wait a while for it to finish. We can tell whether it is done by comparing the document count of plagiarism-checker with that of the plagiarism-docs source index, and then inspect the enriched documents in the plagiarism-checker index; a small way to do this from the notebook is sketched below.
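The original post shows the document counts in Kibana screenshots. The following polling loop (my addition, not part of the original notebook) performs the same check programmatically:

```python
import time

# Optional progress check (not in the original notebook): wait until the checker
# index has as many documents as the source index.
while True:
    client.indices.refresh(index="plagiarism-checker")
    source_count = client.count(index="plagiarism-docs")["count"]
    checker_count = client.count(index="plagiarism-checker")["count"]
    print(f"plagiarism-docs: {source_count}, plagiarism-checker: {checker_count}")
    if checker_count >= source_count:
        break
    time.sleep(30)  # inference on every document is slow, so poll sparingly
```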
## Checking for duplicate text

### Direct plagiarism

```python
model_text = "Understanding and reasoning about cooking recipes is a fruitful research direction towards enabling machines to interpret procedural text. In this work, we introduce RecipeQA, a dataset for multimodal comprehension of cooking recipes. It comprises of approximately 20K instructional recipes with multiple modalities such as titles, descriptions and aligned set of images. With over 36K automatically generated question-answer pairs, we design a set of comprehension and reasoning tasks that require joint understanding of images and text, capturing the temporal flow of events and making sense of procedural knowledge. Our preliminary results indicate that RecipeQA will serve as a challenging test bed and an ideal benchmark for evaluating machine comprehension systems. The data and leaderboard are available at http://hucvl.github.io/recipeqa."

response = client.search(
    index="plagiarism-checker",
    size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response["hits"]["hits"]:
    score = hit["_score"]
    title = hit["_source"]["title"]
    abstract = hit["_source"]["abstract"]
    openai = hit["_source"]["openai-detector"]["predicted_value"]
    url = hit["_source"]["url"]

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: {title}\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == "Fake":
            print("This document may have been created by AI.\n")
    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")
        if openai == "Fake":
            print("This document may have been created by AI.\n")
    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: {title}\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == "Fake":
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = "roberta-base-openai-detector"  # OpenAI text classification model

document = [{"text_field": model_text}]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response["inference_results"][0]["predicted_value"]

if predicted_value == "Fake":
    print("Note: The text query you entered may have been generated by AI.\n")
```

### Similar text: paraphrase plagiarism

```python
model_text = "Comprehending and deducing information from culinary instructions represents a promising avenue for research aimed at empowering artificial intelligence to decipher step-by-step text. In this study, we present CuisineInquiry, a database for the multifaceted understanding of cooking guidelines. It encompasses a substantial number of informative recipes featuring various elements such as headings, explanations, and a matched assortment of visuals. Utilizing an extensive set of automatically crafted question-answer pairings, we formulate a series of tasks focusing on understanding and logic that necessitate a combined interpretation of visuals and written content. This involves capturing the sequential progression of events and extracting meaning from procedural expertise. Our initial findings suggest that CuisineInquiry is poised to function as a demanding experimental platform."

response = client.search(
    index="plagiarism-checker",
    size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response["hits"]["hits"]:
    score = hit["_score"]
    title = hit["_source"]["title"]
    abstract = hit["_source"]["abstract"]
    openai = hit["_source"]["openai-detector"]["predicted_value"]
    url = hit["_source"]["url"]

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: {title}\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == "Fake":
            print("This document may have been created by AI.\n")
    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")
        if openai == "Fake":
            print("This document may have been created by AI.\n")
    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: {title}\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == "Fake":
            print("This document may have been created by AI.\n")

ml_client = MlClient(client)

model_id = "roberta-base-openai-detector"  # OpenAI text classification model

document = [{"text_field": model_text}]

ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)

predicted_value = ml_response["inference_results"][0]["predicted_value"]

if predicted_value == "Fake":
    print("Note: The text query you entered may have been generated by AI.\n")
```

The complete code can be downloaded from: https://github.com/liu-xiao-guo/elasticsearch-labs/blob/main/supporting-blog-content/plagiarism-detection-with-elasticsearch/plagiarism_detection_es_self_managed.ipynb
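When you are done experimenting, the two deployed models keep memory allocated on the ML nodes. A minimal cleanup sketch (my addition, not part of the original notebook; it assumes the model IDs used above):

```python
# Optional cleanup (not in the original notebook): stop the two trained-model
# deployments to release ML resources.
for model_id in ["roberta-base-openai-detector",
                 "sentence-transformers__all-mpnet-base-v2"]:
    MlClient.stop_trained_model_deployment(client, model_id=model_id)
```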