# general
window.title=Corpus analyzer
hyperlink.help=Help
button.language=SL
button.computeNgrams=Calculate
button.cancel=Cancel
# template
tab.corpusTab=Corpus
tab.filterTab=Filter
tab.characterLevelTabNew=Characters
tab.wordLevelTab=Word parts
tab.oneWordAnalysisTab=Words
tab.stringLevelTabNew2=Word sets
# corpus tab
label.setCorpusLocation=Set corpus location
button.setCorpusLocation=Set location
label.readHeaderInfo=Read info from headers
checkBox.readHeaderInfo=
label.chooseResultsLocation=Choose result location
button.chooseResultsLocation=Set location
label.selectReader=Select reader
label.outputName=Output file name
label.corpusTab.chooseCorpusLocationH=Select the folder which contains the corpus. The folder should only contain one corpus and should not contain files that are not part of the corpus.
label.corpusTab.readHeaderInfoH=If you select this option, the taxonomy will be read separately. This might take a while.
label.corpusTab.chooseResultsLocationH=Choose result location
label.corpusTab.selectReaderH=Select reader
label.corpusTab.outputNameH=Output file name
# character analysis tab
label.stringLength=Number of characters
label.calculateFor=Calculate for
label.displayTaxonomy=Display taxonomies
label.dataLimit=Data limitations
label.msd=Morphosyntactic tag
label.taxonomy=Filter by taxonomy
label.minimalOccurrences=Min. nr. occurrences
label.minimalTaxonomy=Min. nr. tax. branches
label.taxonomySetOperation=Filter taxonomy by
label.solarFilters=Selected filters:
string.lemma=lemma
string.word=word
label.letter.stringLengthH=Enter the length of character strings.
label.letter.calculateForH=Character strings will be counted in the selected units.
label.letter.displayTaxonomyH=The output will also contain the distribution of character strings across the corpus taxonomy.
label.letter.msdH=Character strings will be counted only in words with the provided tag.
label.letter.taxonomyH=Character strings will be counted only in selected text types.
label.letter.minimalOccurrencesH=Character strings with fewer occurrences will not be included in the output.
label.letter.minimalTaxonomyH=Character strings that occur in fewer taxonomy branches will not be included in the output.
label.letter.taxonomySetOperationH=Include texts that match at least one of the selected branches (union) or all of the selected branches (intersection).
# word part tab
label.alsoVisualize=Also split by
label.lengthSearch=Search for word parts of a specified length
label.prefixLength=Length of initial part
label.suffixLength=Length of final part
label.listSearch=Search for word parts with a specified list
label.prefixList=List of initial parts
label.suffixList=List of final parts
label.wordPart.calculateForH=Word parts will be counted in the selected units.
label.wordPart.alsoVisualizeH=The output will also include the selected data.
label.wordPart.displayTaxonomyH=The output will also contain the distribution of word parts across the corpus taxonomy.
label.wordPart.prefixLengthH=Specify the length (in number of characters) of the initial word part.
label.wordPart.suffixLengthH=Specify the length (in number of characters) of the final word part.
label.wordPart.prefixListH=Separate the word parts with a semicolon (e.g. out; over).
label.wordPart.suffixListH=Separate the word parts with a semicolon (e.g. ation; ness).
label.wordPart.msdH=Word parts will only be counted in words with the specified tag.
label.wordPart.taxonomyH=Word parts will only be counted in the selected text types.
label.wordPart.minimalOccurrencesH=Units with the specified word part that occur fewer times will not be included in the output.
label.wordPart.minimalTaxonomyH=Units with the specified word part that are present in fewer taxonomy branches will not be included in the output.
# word tab
label.writeMsdAtTheEnd=Split the morphosyntactic tag
label.word.calculateForH=Specify what the program should treat as the main unit for the output.
label.word.alsoVisualizeH=The output will also contain the selected data.
label.word.displayTaxonomyH=The output will also contain the distribution of units across the corpus taxonomy.
label.word.writeMsdAtTheEndH=The output will also include individual parts of morphosyntactic tags.
label.word.msdH=Only words with the specified tag will be counted.
label.word.taxonomyH=Only words in the selected text types will be counted.
label.word.minimalOccurrencesH=Words with fewer occurrences will not be included in the output.
label.word.minimalTaxonomyH=Words that occur in fewer taxonomy branches will not be included in the output.
# word sets tab
label.wordSet.calculateForH=Specify the units from which word sets will be extracted.
label.wordSet.alsoVisualizeH=The output will also include the selected data.
label.wordSet.displayTaxonomyH=The output will also contain the distribution of word sets across the corpus taxonomy.
label.wordSet.skipValueH=Enter the maximum number of words that can appear between two words in a word set.
label.wordSet.ngramValueH=The program will extract word sets with the specified number of tokens.
label.wordSet.notePunctuationsH=Word sets will include punctuation.
label.wordSet.collocabilityH=The program will also calculate collocability measures between words within the word set.
label.wordSet.msdH=The program will only count word sets with the specified tag.
label.wordSet.taxonomyH=Word sets will only be extracted from the selected taxonomy branches.
label.wordSet.minimalOccurrencesH=Word sets with fewer occurrences will not be included in the output.
label.wordSet.minimalTaxonomyH=Word sets that occur in fewer taxonomy branches will not be included in the output.
# calculate for
calculateFor.WORD=word
calculateFor.NORMALIZED_WORD=normalized word
calculateFor.LEMMA=lemma
calculateFor.MORPHOSYNTACTIC_SPECS=morphosyntactic tag
calculateFor.MORPHOSYNTACTIC_PROPERTY=morphosyntactic property
calculateFor.WORD_TYPE=word type
calculateFor.DIST_WORDS=word
calculateFor.DIST_LEMMAS=lemma
# n-grams
label.skipValue=Skip value
label.slowSpeedWarning=WARNING! USING THE ABOVE FILTER MAY DECREASE PROCESSING SPEED!
label.ngramValue=N-gram length
label.notePunctuations=Include punctuation
label.collocability=Collocability
# taxonomy set operations
taxonomySetOperation.UNION=union
taxonomySetOperation.INTERSECTION=intersection
# Solar filters
filter.solarRegijaL=Region
filter.solarPredmetL=Subject
filter.solarRazredL=Class
filter.solarLetoL=Year
filter.solarSolaL=School
filter.solarVrstaBesedilaL=Text type
filter.solarRegija=region
filter.solarPredmet=subject
filter.solarRazred=class
filter.solarLeto=year
filter.solarSola=school
filter.solarVrstaBesedila=type
# messages
message.WARNING_CORPUS_NOT_FOUND=No suitable corpus files were found in the selected directory.
message.WARNING_RESULTS_DIR_NOT_VALID=You do not have permission to access the selected directory.
message.WARNING_DIFFERING_NGRAM_LEVEL_AND_FILTER_TOKENS=The specified n-gram length and number of words do not match.
message.WARNING_DIFFERING_NGRAM_LEVEL_AND_FILTER_TOKENS_INFO=Choose another number or modify the filter.
message.WARNING_WORD_OR_LEMMA=Specify whether you want to calculate statistics for words or lemmas.
message.WARNING_ONLY_NUMBERS_ALLOWED=Please enter a valid number.
message.WARNING_NUMBER_TOO_BIG=The entered number is larger than the number of taxonomy branches.
message.WARNING_MISMATCHED_NGRAM_AND_TOKENS_VALUES=The n-gram length (%d) and the number of tags included (%d) must match.
message.WARNING_MISSING_STRING_LENGTH=String length must be greater than 0. Length has been set to the default value (1).
message.WARNING_NO_TAXONOMY_FOUND=The program was unable to read the taxonomy from the corpus files. Please select another directory or a different corpus.
message.WARNING_NO_SOLAR_FILTERS_FOUND=The program was unable to read the filters from the corpus files. Please select another location or a different corpus.
message.ERROR_WHILE_EXECUTING=An error occurred during program execution.
message.ERROR_WHILE_SAVING_RESULTS_TO_CSV=An error occurred while saving the results.
message.ERROR_NOT_ENOUGH_MEMORY=There is not enough memory to analyze this amount of data.
message.ERROR_NO_REGI_FILE_FOUND=Missing file \"%s\".
message.MISSING_NGRAM_LEVEL=N-gram level
message.MISSING_CALCULATE_FOR=Calculate for
message.MISSING_SKIP=""
message.MISSING_STRING_LENGTH=String length
message.MISMATCHED_STRING_LENGTH_AND_MSD_REGEX=String length and regex filter do not match.
message.NOTIFICATION_FOUND_X_FILES=Nr. of found files: %s
message.NOTIFICATION_CORPUS=Corpus: %s
message.NOTIFICATION_ANALYSIS_COMPLETED=Analysis complete. The results have been saved successfully.
message.NOTIFICATION_ANALYSIS_COMPLETED_NO_RESULTS=Analysis complete, but no statistics matching all the specified conditions could be calculated.
message.RESULTS_PATH_SET_TO_DEFAULT=Save location is set to the corpus location.
message.NOTIFICATION_ANALYSIS_CANCELED=The analysis was canceled.
message.ONGOING_NOTIFICATION_ANALYZING_FILE_X_OF_Y=Analyzing file %d of %d (%s) - estimated time remaining: %d s
message.CANCELING_NOTIFICATION=Canceled
message.LABEL_CORPUS_LOCATION_NOT_SET=Corpus location is not set.
message.LABEL_RESULTS_LOCATION_NOT_SET=Result location is not set.
message.LABEL_RESULTS_CORPUS_TYPE_NOT_SET=Corpus type is not set.
message.LABEL_SCANNING_CORPUS=Searching for and analyzing corpus files...
message.LABEL_SCANNING_SINGLE_FILE_CORPUS=Input analysis
message.COMPLETED=Completed
#message.TOOLTIP_chooseCorpusLocationB=Select the folder which contains the corpus. The folder should only contain one corpus and should not contain too many files that are not part of the corpus.
#message.TOOLTIP_readHeaderInfoChB=If you select this option, the taxonomy will be read separately. This might take a while.
message.TOOLTIP_readNotePunctuationsChB=Punctuation in sentences is included in the analysis.
message.TOOLTIP_readDisplayTaxonomyChB=The output file will include the distribution across the taxonomy branches.
windowTitles.error=Error
windowTitles.warning=Warning
windowTitles.confirmation=Confirmation
# export header translations
exportHeader.corpus=Corpus:
exportHeader.date=Date:
exportHeader.executionTime=Execution time:
exportHeader.analysis=Analysis:
exportHeader.analysis.letters=characters
exportHeader.analysis.wordParts=word parts
exportHeader.analysis.words=words
exportHeader.analysis.wordSets=word sets
exportHeader.numberLetters=Number of characters:
exportHeader.calculateFor=Calculate for:
exportHeader.alsoFilter=Also split by:
exportHeader.displayTaxonomies=Display taxonomy branches:
exportHeader.ngramLevel=N-gram level:
exportHeader.skipValue=Skip value:
exportHeader.notePunctuations=Include punctuation:
exportHeader.collocability=Collocability:
exportHeader.writeMSDAtTheEnd=Write tag at the end:
exportHeader.prefixLength=Initial part length:
exportHeader.suffixLength=Final part length:
exportHeader.prefixList=Initial part list:
exportHeader.suffixList=Final part list:
exportHeader.msd=Morphosyntactic tag:
exportHeader.taxonomy=Filter by taxonomy:
exportHeader.minOccurrences=Min. nr. occurrences:
exportHeader.minTaxonomies=Min. nr. taxonomy branches:
exportHeader.additionalFilters=Additional filters:
exportHeader.yes=yes
exportHeader.no=no
exportHeader.taxonomySetOperation=Filter taxonomy by:
# export table header translations
exportTable.skippedWords=Skipped words
exportTable.lettersSmall=Characters (lower case)
exportTable.wordsSmall=Lemma (lower case)
exportTable.wordBeginning=Initial part of the word
exportTable.wordEnding=Final part of the word
exportTable.wordRest=The rest of the word
exportTable.totalRelativeFrequency=Total relative frequency (per million occurrences)
exportTable.absoluteFrequency=Absolute frequency
exportTable.percentage=Share
exportTable.relativeFrequency=Relative frequency
exportTable.msd=msd
# parts
exportTable.part.word=words:
exportTable.part.normalizedWord=normalized words:
exportTable.part.lemma=lemmas:
exportTable.part.msd=msd:
exportTable.part.msdProperty=msd property:
exportTable.part.wordType=word type:
exportTable.part.letterSet=character set
exportTable.part.word2=word
exportTable.part.normalizedWord2=normalized word
exportTable.part.lemma2=lemma
exportTable.part.msd2=msd
exportTable.part.msdProperty2=msd property
exportTable.part.wordType2=word type
exportTable.part.letterSet2=Share of total sum of all letter sets
exportTable.part.letterSet3=Letter set
exportTable.part.word3=Word
exportTable.part.normalizedWord3=Normalized word
exportTable.part.lemma3=Lemma
exportTable.part.msd3=Msd
exportTable.part.msdProperty3=Msd property
exportTable.part.wordType3=Word type
exportTable.part.set=set
exportTable.part.share=Absolute share of
exportTable.part.absoluteFrequency=Absolute frequency of
exportTable.part.totalFound=Total sum of all
exportTable.part.totalFoundLetters=Total sum of all found letters of
exportTable.part.totalSumString=Total sum of
exportTable.part.totalSumLetters=Total sum of all letters of
# generated file names
exportFileName.letters=letters
exportFileName.wordParts=word-parts
exportFileName.words=words
exportFileName.wordSets=word-sets
exportFileName.gram=-gram
exportFileName.skip=-skip
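# Note for maintainers (a sketch, not part of the UI strings): values containing
# %s/%d placeholders are assumed to be filled with java.lang.String.format after
# lookup through java.util.ResourceBundle; the bundle name below is hypothetical.
#
#   ResourceBundle bundle = ResourceBundle.getBundle("messages");
#   // "Corpus: %s" -> "Corpus: Gigafida"
#   String msg = String.format(bundle.getString("message.NOTIFICATION_CORPUS"), "Gigafida");
#
# Because values are parsed as java.util.Properties, literal backslashes and
# leading whitespace in values must be escaped, as in message.ERROR_NO_REGI_FILE_FOUND.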